Re: Local types and instances

2018-04-29 Thread Edward Kmett
This isn't sound. 

You lose the global uniqueness of instance resolution, which is a key part of 
how libraries like Data.Set and Data.Map move the constraints from being 
carried around in the Set to the actual use sites. With "local" instances it is 
very easy to construct such a map in one local context and use it in another 
with a different instance. 

This issue is mentioned at the end of the very first paper in which typeclasses
were introduced: local instance definitions lose principal typings when they
aren't top level.

The reify/reflect approach from the reflection package allows this sort of 
thing without loss of principal typing, through generativity. This lets you 
write instances that depend on values in scope by reifying the value as a fresh 
type that has an associated instance that can be used to get your value back. 
The type variable generated is always "fresh" and doesn't violate any open 
world assumptions or have any concerns like the above Set/Map situation.
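As a sketch of the pattern: reify the comparator as a fresh phantom type, give a tagged newtype an Ord instance that reflects the comparator back, and keep the Set local. The snippet below inlines the core reify/reflect trick from the reflection package so it is self-contained; in real code you would import Data.Reflection instead, and the names O, runO, and the go loop are illustrative, not the package's API.

```haskell
{-# LANGUAGE RankNTypes, ScopedTypeVariables, MultiParamTypeClasses,
             FunctionalDependencies, FlexibleContexts, UndecidableInstances #-}
import Data.Proxy (Proxy (..))
import qualified Data.Set as Set
import Unsafe.Coerce (unsafeCoerce)

-- Minimal core of the reflection package, inlined so this sketch is
-- self-contained; normally: import Data.Reflection (Reifies, reify, reflect)
class Reifies s a | s -> a where
  reflect :: proxy s -> a

newtype Magic a r = Magic (forall s. Reifies s a => Proxy s -> r)

reify :: forall a r. a -> (forall s. Reifies s a => Proxy s -> r) -> r
reify a k = unsafeCoerce (Magic k :: Magic a r) (const a) Proxy

-- A value tagged with a fresh type s that reifies the comparator.
newtype O s a = O { runO :: a }

instance Reifies s (a -> a -> Ordering) => Eq (O s a) where
  x == y = compare x y == EQ

instance Reifies s (a -> a -> Ordering) => Ord (O s a) where
  compare (O x) (O y) = reflect (Proxy :: Proxy s) x y

-- The Set of (O s a) values never escapes the scope of s,
-- so global coherence is preserved.
ordNubBy :: forall a. (a -> a -> Ordering) -> [a] -> [a]
ordNubBy f xs = reify f (\(_ :: Proxy s) ->
  let go _    []       = []
      go seen (y : ys)
        | y `Set.member` seen = go seen ys
        | otherwise           = y : go (Set.insert y seen) ys
  in map runO (go Set.empty (map O xs :: [O s a])))
```

Because s is generated fresh for each call, two Sets built with different comparators can never be confused with one another.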

-Edward

> On Apr 29, 2018, at 6:28 PM, M Farkas-Dyck  wrote:
> 
> Idea: Allow local type and instance declarations, i.e. in `let` or `where` 
> clause.
> 
> Primary motivation: defining `ordNubBy` and such, i.e. functions which take 
> an effective class dictionary
> 
> In Haskell now, we can readily write `ordNubOn :: Ord b => (a -> b) -> [a] -> 
> [a]` in terms of `Set` [0], but not `ordNubBy :: (a -> a -> Ordering) -> [a] 
> -> [a]`, as `Set` requires an `Ord` instance. This is for good reason — 
> incoherence would destroy the guarantees of `Set` — but in the case of 
> `ordNubBy`, the `Set` would never escape, so it's fine. I needed `ordNubBy` 
> in a past job, so we actually copied much of the `Set` code and modified it 
> to take an `(a -> a -> Ordering)` rather than have an `Ord` constraint, which 
> works but is unfortunate. This proposal would allow the following:
> 
> ```
> ordNubBy :: (a -> a -> Ordering) -> [a] -> [a]
> ordNubBy f = let newtype T = T { unT :: a }
>                  instance Ord T where compare = f
>              in fmap unT . ordNub . fmap T
> ```
> 
> Secondary motivation: defining local utility types in general
> 
> Note: The primary motivation is a subcase of this, with local instances 
> defined in terms of local arguments.
> 
> It is sometimes convenient or necessary to define a utility type which is 
> only used in the scope of a single term. It would be nice to be able to 
> define this in a `let` or `where` clause rather than at top level, for the 
> same reason it is nice to be able to define helper functions there.
> 
> Semantics:
> My thought is the local type is unique to each use of the term it is defined 
> in, to not cause incoherence. I believe the implementation should be feasible 
> as typeclass constraints are lowered to dictionary arguments anyhow. But i am 
> neither a type theorist nor an expert in GHC so please point out any flaws in 
> my idea.
> 
> I'm also thinking the type of the term where the local type is defined is not 
> allowed to contain the local type. I'm not sure what the soundness 
> implications of allowing this (unique) type to escape would be, but it seems 
> like it might lead to confusing error messages when types which seem to have 
> the same name can't be unified, and generally trips my informal mental 
> footgun alarm.
> 
> Thoughts?
> 
> [0] https://gist.github.com/strake/333dfd697a1ade4fea69e6c36536fc16
___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime


Re: Remove eq and show from num class

2017-09-10 Thread Edward Kmett
On Sun, Sep 10, 2017 at 4:48 AM, Anthony Clayden <
anthony_clay...@clear.net.nz> wrote:

> > On Fri Sep 8 15:58:10 UTC 2017, Carter Schonwald wrote:
> >
> > I mostly wanted to confirm that we in fact will actually
> > say yes before doing the formal writing up :)
>
> Seriously -- and please stop using smileys:
> you're right to be cautious.
> You need to rewrite the whole of Section 6.4
> (nearly 5 pages), starting with changing the title.
> And I think it'll be a struggle to clarify what applies
> to genuine numbers vs what applies to 'other stuff'.
> As Bardur says:
> > far from trivial to spec without reference to
> > implementation details
>
> I think there's another way: leave Sec 6.4 largely
> unchanged.
> Add a short note that implementations might extend
> class `Num` to include non-numbers.
>
> GHC 'morally complies' with section 6.4 for Numbers.
> (i.e. all number types are members of `Num, Eq, Show`.)
>
> GHC has (or allows) other types in `Num` which are not
> numbers,
> so section 6.4 doesn't apply.
>
> I'm puzzled by Bardur's other comments:
> > On Fri Sep 8 16:16:54 UTC 2017, Bardur Arantsson wrote:
> >
> > There aren't really any widely used Haskell compilers
> > other than GHC ...
>
> That's true and sad and a weakness for Haskell
> in general. On this particular topic there are
> other compilers: GHC prior v7.4, Hugs.
>
> > and speccing for things that aren't actually used in
> > practice ...
>
> But there's already a spec! namely the existing report.
> And `Eq`, `Show` for numbers are used heavily.
>
> I think this proposal is not to remove `Eq, Show`
> from number types that already have them(?)
> But Carter's subject line does seem to be saying that,
> which is what alarmed me at first reading.
>

Smileys and process aside, all that is being proposed is ratifying the
actual behavior that GHC has had since 2011. Num would lose Eq and Show
superclasses in the report. This allows users to create instances like Num
a => Num (x -> a), overloaded integers for working with non-printable
expression types, infinite precision real number types that literally
cannot be safely compared for equality, etc., all of which are fairly common
things to talk about in Haskell today.
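Concretely, dropping the superclasses is what makes the pointwise instance for functions writable at all. A standard sketch (not code from any particular library; `quadratic` is just an illustration):

```haskell
{-# LANGUAGE FlexibleInstances #-}

-- Pointwise arithmetic on functions: there is no sensible Eq or Show
-- for (x -> a), yet Num is perfectly well-defined.
instance Num a => Num (x -> a) where
  f + g         = \x -> f x + g x
  f - g         = \x -> f x - g x
  f * g         = \x -> f x * g x
  negate f      = negate . f
  abs f         = abs . f
  signum f      = signum . f
  fromInteger n = const (fromInteger n)

-- Numeric literals now work at function types, too:
quadratic :: Double -> Double
quadratic = (^ 2) + 3 * (^ 1) + 2   -- \x -> x^2 + 3*x + 2
```

This compiles with today's GHC precisely because `Num` has not required `Eq` or `Show` since 7.4.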

Yes, a small section of the report would have to be rewritten slightly to
codify the change in behavior and to note that numeric literals in pattern
position incur an Eq constraint, which is what they do today in GHC.

Nobody is proposing removing any instances for types that have them, merely
capturing the existing behavior that having Num a does not imply Eq a and
Show a. There is simply no good reason other than historical accident for
it to do so and many good reasons for it to not.

-Edward


Re: type (++) = (<>)

2017-07-03 Thread Edward Kmett
Note: I realize nobody is directly saying that we should use (++) instead
of (<>) in this conversation just yet, but I want to clear a few things up.

One of the early options when the operator (<>) was coined was to try to
say we should just generalize the type of (++) instead to make it mappend.
(Note: it originally was mplus, in a Haskell version long long ago, so it
keeps getting repurposed!) Unfortunately, this plan ran afoul of the fact
that the primary libraries using the (<>) notation at the time (pretty
printing libraries) also mixed it heavily with (++), exploiting the
different fixities involved. (Finding a decent fixity for (<>) was a huge
chunk of the conversation at the time.)

There is a deliberate fixity difference between (++) and (<>). A good chunk
of pretty-printing code out there mixes the two and would break pretty
horribly if we just outright removed (++), and trying to do a visual search
and replace of (++) with (<>) in light of their different fixities is a very
error-prone process. So we aren't currently planning on deprecating the
existing (++) operator any time soon; at least, nobody has put a proposal to
the core libraries committee to that end.
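The grouping hazard can be made concrete with a stand-in operator at (++)'s fixity. This is entirely contrived; `P` and `(+++)` are illustrative, not from any library, and the "concatenation" here just records how the expression was parenthesized:

```haskell
import Data.Semigroup (Semigroup (..))

-- A type whose combining operators record their own parenthesization.
newtype P = P String deriving (Eq, Show)

instance Semigroup P where            -- (<>) is infixr 6 in base
  P a <> P b = P ("(" ++ a ++ "<>" ++ b ++ ")")

infixr 5 +++                          -- same fixity as (++)
(+++) :: P -> P -> P
P a +++ P b = P ("(" ++ a ++ "+++" ++ b ++ ")")

-- Textually swapping one operator for the other changes the parse:
mixed, uniform :: P
mixed   = P "a" <>  P "b" +++ P "c"   -- (<>) binds tighter: ((a<>b)+++c)
uniform = P "a" +++ P "b" +++ P "c"   -- infixr 5 chains right: (a+++(b+++c))
```

For associative operations like list append the regrouping is harmless, but for the non-associative combinators in the old pretty-printing libraries it silently changes the output.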

Since the call was made to make (<>) become the new operator, we ultimately
decided to leave (++) untouched, even though it could be generalized to
match (<>), for much the same reason that map still exists, despite there
being a more general fmap: Ultimately, there isn't a reasonable migration
plan to make (++) or map become the way you define the instance involved,
and at least this way the name duplication can be leveraged by the folks
who want a less polymorphic combinator.

Would the world be a little tidier without map or (++) hanging about? Sure.
But the hate mail levels in my inbox would skyrocket commensurately. ;)

-Edward

On Mon, Jul 3, 2017 at 5:01 PM, Erik de Castro Lopo 
wrote:

> Vassil Ognyanov Keremidchiev wrote:
>
> > What do you think of making (++) the same as (<>) so we could use ++ as
> > concatenation of any monoid, not just lists in Haskell 2020?
> > This will be more intuitive for beginners, too.
>
> Two symbolic operators that are synonymous seems a bit of a waste. I would
> much rather see (++) be deprecated in favour of (<>). At work we have a
> custom prelude which already does this.
>
> Erik
> --
> --
> Erik de Castro Lopo
> http://www.mega-nerd.com/


Re: Anything happening?

2016-07-20 Thread Edward Kmett
I'll be at Zurihac as well should you need input from the CLC side.

Sent from my iPhone

> On Jul 20, 2016, at 5:20 PM, David Luposchainsky via Haskell-prime 
>  wrote:
> 
>> On 19.07.2016 20:08, Michael Walker wrote:
>> I haven't seen anything on this mailing list for a little while, is
>> anything going on? Has the process by which proposals will be
>> submitted and worked out been solidified yet?
> 
> I’m planning to spend some time at ZuriHac this weekend thinking about 
> Haskell’,
> maybe getting the Report to build, making a few experimental changes. Oh, and
> talking to people mostly, so if you’re around... :-)
> 
> David/quchen


Re: Breaking Changes and Long Term Support Haskell

2015-10-22 Thread Edward Kmett
On Wed, Oct 21, 2015 at 8:42 PM, Gregory Collins 
wrote:

>
> On Wed, Oct 21, 2015 at 3:18 PM, Geoffrey Mainland 
> wrote:
>
>> My original email stated my underlying concern: we are losing valuable
>> members of the community not because of the technical decisions that are
>> being made, but because of the process by which they are being made.
>>
> [If] you're doing research you're on the treadmill, almost by definition,
> and you're delighted that we're finally making some rapid progress on
> fixing up some of the longstanding warts.
>
> If you're a practitioner, you are interested in using Haskell for, y'know,
> writing programs. You're probably in one of two camps: you're in "green
> field" mode writing a lot of new code (early stage startups, prototype
> work, etc), or you're maintaining/extending programs you've already written
> that are out "in the field" for you doing useful work. Laura Wingerd calls
> this the "annealing temperature" of software, and I think this is a nice
> metaphor to describe it. How tolerant you are of ecosystem churn depends on
> what your temperature is: and I think it should be obvious to everyone that
> Haskell having "success" for programming work would mean that lots of
> useful and correct programs get written, so everyone who is in the former
> camp will cool over time to join the latter.
>

> I've made the point before and I don't really want to belabor it: our de
> facto collective posture towards breaking stuff, especially in the past few
> years, has been extremely permissive, and this alienates people who are
> maintaining working programs.
>

Even among people who purported to be teaching Haskell or using Haskell
today in industry the margin of preference for the concrete FTP proposal
was ~79%. This was considerably higher than I expected in two senses. One:
there were a lot more people who claimed to be in one of those two roles
than I expected by far, and two: their appetite for change was higher than
I expected. I initially expected to see a stronger "academic vs. industry"
split in the poll, but the groups were only distinguishable by a few
percentage point delta, so while I expected roughly the end percentage of
the poll, based on the year prior I'd spent running around the planet to
user group meetings and the like, I expected it mostly because I expected
more hobbyists and less support among industrialists.

> I'm actually firmly of the belief that the existing committee doesn't
> really have process issues, and in fact, that often it's been pretty
> careful to minimize the impact of the changes it wants to make. As others
> have pointed out, lots of the churn actually comes from platform libraries,
> which are out of the purview of this group.
>

Historically we've had a bit of a split personality on this front. Nothing
that touches the Prelude had changed in 17 years. On the other hand the
platform libraries had maintained a pretty heavy rolling wave of breakage
the entire time I've been around in the community. On a more experimental
feature front, I've lost count of the number of different things we've done
to Typeable or template-haskell.


> All I'm saying is that if we want to appeal to or cater to working
> software engineers, we have to be a lot less cavalier about causing more
> work for them, and we need to prize stability of the core infrastructure
> more highly. That'd be a broader cultural change, and that goes beyond
> process: it's policy.
>

The way things are shaping up, we've had 17 years of rock solid stability,
1 release that incorporated changes that were designed to minimize impact,
to the point that the majority of the objections against them are of the
form where people would prefer that we broke _more_ code, to get a more
sensible state. Going forward, it looks like the next 2 GHC releases will
have basically nothing affecting the Prelude, and there will be another
punctuation in the equilibrium around 8.4 as the next set of changes kicks
in over 8.4 and 8.6. That gives 2 years' worth of advance notice of pending
changes, and a pretty strong guarantee from the committee that you should
be able to maintain code with a 3 release window without running afoul of
warnings or needing CPP.

So, out of curiosity, what additional stability policy is it that you seek?

-Edward


Re: Breaking Changes and Long Term Support Haskell

2015-10-22 Thread Edward Kmett
On Thu, Oct 22, 2015 at 2:04 AM, Taru Karttunen  wrote:

> E.g. if
>
> A) Most of Hackage (including dependencies) compiles with new GHC.
> (stack & stackage helps somewhat)
>
> B) There is an automated tool that can be used to fix most code
> to compile with new versions of GHC without warnings or CPP.
>
> C) Hackage displays vocally what works with which versions of
> GHC (Status reports do help somewhat)
>


> Then I think much of the complaints would go away.


If we had those things, indeed they would!

However, beyond A (GHC 7.10 was tested more extensively against
hackage/stackage than any previous release of Haskell by far!), the others
require various degrees of engineering effort, including some way to deal
with refactoring code that already has CPP in it, more extensive build-bot
services, etc., and those sorts of non-trivial artifacts just haven't been
forthcoming. =/

I would be very happy if those things showed up, however.

-Edward


Re: Breaking Changes and Long Term Support Haskell

2015-10-22 Thread Edward Kmett
On Thu, Oct 22, 2015 at 1:37 PM, Gregory Collins <g...@gregorycollins.net>
wrote:

>
> On Wed, Oct 21, 2015 at 11:40 PM, Edward Kmett <ekm...@gmail.com> wrote:
>
>> All I'm saying is that if we want to appeal to or cater to working
>>> software engineers, we have to be a lot less cavalier about causing more
>>> work for them, and we need to prize stability of the core infrastructure
>>> more highly. That'd be a broader cultural change, and that goes beyond
>>> process: it's policy.
>>>
>>
>> The way things are shaping up, we've had 17 years of rock solid stability
>>
>
> I have >95% confidence that all of the C++ programs I wrote 15 years ago
> would build and work if I dusted them off and typed "make" today. I have
> Haskell programs I wrote last year that I probably couldn't say that about.
>
> So I don't buy that, at all, at least if we're discussing the topic of the
> stability of the core infrastructure in general rather than changes being
> made to the Prelude. It's been possible to write to Haskell 98 without too
> much breakage, yes, but almost nobody actually does that; they write to
> Haskell as defined by GHC + the boot libraries + Haskell platform +
> Hackage, IMO with decreasing expectations of stability for each. The core
> set breaks a lot.
>

I definitely agree here.

We have a lot of libraries in the Haskell Platform that have fairly liberal
change policies. On the other hand, we have a policy of "maintainer
decides" around issues. This yields a fairly decentralized change
management process, with different maintainers who have different views.
The Platform gives us a central pool of packages that are generally the
"best of breed" in their respective spaces, but gives us few stability
guarantees.

Heck, every release I wind up having to change whatever code I have that
uses template-haskell or Typeable.

On the other hand, it isn't clear with a larger "core" platform with harder
stability guarantees that we have a volunteer force that can and would sign
up for the long slog of maintenance without that level of autonomy.


> We definitely shouldn't adopt a posture to breaking changes as
> conservative as the C++ committee's, and literally nobody in the Haskell
> community is arguing against breaking changes in general, but as I've
> pointed out, most of these breakages could have been avoided with more
> careful engineering, and indeed, on many occasions the argument has been
> made and it's fallen on deaf ears.
>

I would argue that there are individual maintainers that give lie to that
statement. In many ways Johan himself has served as a counter-example
there. The libraries he has maintained have acted as a form of bedrock with
long maintenance windows. On the other hand, the burden of maintaining that
stability seems to have ultimately burned him out.

> They can speak for themselves but I think for Mark and Johan, this is a
> "straw that broke the camel's back" issue rather than anything to do with
> the merits of removing return from Monad. I think the blowback just happens
> to be so much stronger on MRP because the breaking change is so close to
> the core of the language, and the benefits are so nebulous. Fixing an
> aesthetic problem has almost zero practical value
>

I personally don't care about the return side of the equation.

Herbert's MRP proposal was an attempt by him to finish out the changes
started by AMP so that a future Haskell Report can read cleanly. Past
reports have been remarkably free of historical baggage.

I'd personally readily sacrifice "progress" there in the interest of
harmony. Herbert as haskell-prime chair possibly feels differently.


> and ">> could be slightly more efficient for some monads" is pretty weak
> sauce.
>

The issue right now around (>>) is that it has knock-on effects that run
pretty far and wide. "Weak sauce" or not, it means through second order
consequences that we can't move the useless mapM and sequence to the top
level from their current status as members of Traversable, and that users
have to care about which of two provably equivalent things they are
using, at all times.

It means that code that calls mapM will be less efficient and that mapM_
behaves in a manner with rather radically different space and time behavior
than mapM today and not in a consistently good way.

-Edward


Re: Breaking Changes and Long Term Support Haskell

2015-10-22 Thread Edward Kmett
On Thu, Oct 22, 2015 at 11:36 AM, Geoffrey Mainland <mainl...@apeiron.net>
wrote:

> On 10/22/2015 11:02 AM, Matthias Hörmann wrote:
> > I would say that the need to import Control.Applicative in virtually
> > every module manually
> > definitely caused some pain before AMP.
>
> In this particular case, there is a trade off between breaking code on
> the one hand and having to write some import statements on the other. I
> find writing some extra imports less painful than breaking (other
> people's and my) code, but the other position is defensible as well. I
> sense that I am in the minority, at least on the libraries list.
>
> > I would also argue that a
> > non-negligible amount
> > of effort goes into teaching the warts, the reasons for the warts and
> > how to work around them.
>
> Which wart(s) in particular? All of them? Does having return (and (>>))
> in Monad make teaching more difficult?
>

Having (>>) means that we have hundreds of monads out there where (>>) has
been optimized, but (*>) has not.

If I were working alone, AMP wouldn't be a huge deal. I could fix the
> code for 7.10 compatibility, but then unless everyone switches to 7.10,
> changes to the codebase made by someone using 7.8, e.g., defining a new
> Monad instance, could break things on 7.10 again. It's easier to stick
> with 7.8. Any time spent dealing with compatibility issues is time not
> spent writing actual code.
>

In the open source world many of us just fire off our code to travis-ci and
get it to build with a dozen different compiler versions. I maintain a lot
of code that supports things back to 7.0 and forward to HEAD this way.
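The usual CPP-free pattern for a Monad instance that compiles cleanly on both sides of the AMP boundary looks roughly like this, sketched with a trivial Identity type:

```haskell
import Control.Applicative (Applicative (..))  -- explicit import for pre-AMP GHCs
import Control.Monad (ap, liftM)

newtype Identity a = Identity { runIdentity :: a }

-- Defining Functor and Applicative in terms of the Monad keeps a single
-- source of truth and satisfies both pre- and post-AMP compilers.
instance Functor Identity where
  fmap = liftM

instance Applicative Identity where
  pure  = Identity
  (<*>) = ap

instance Monad Identity where
  return = pure                    -- the canonical post-AMP definition
  Identity a >>= k = k a
```

On a 7.8-era GHC the Applicative instance is simply an ordinary instance; on 7.10 and later it discharges the new superclass, with no CPP in sight.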


> I outlined one possible path to avoid this kind of issue: spend more
> time thinking about ways to maintain compatibility. We had proposals for
> doing this with AMP.
>

And on the other hand we also had a concrete proposal that didn't require
language changes that was ridiculously popular. People had been talking
about Applicative as a superclass of Monad for a decade before we finally
acted upon the AMP. People had been talking about superclass defaulting for
a decade. When do you cut off discussion and ship the proposal that has
overwhelming support? If there is no process that enables this you can
stall the process indefinitely by raising objections of this form. Such a
situation is not without costs all its own.

-Edward


> Cheers,
> Geoff
>
Re: Breaking Changes and Long Term Support Haskell

2015-10-22 Thread Edward Kmett
On Thu, Oct 22, 2015 at 12:20 PM, Mario Blažević 
wrote:

> On 15-10-22 09:29 AM, Geoffrey Mainland wrote:
>
>> ...
>>
>> 1) What is the master plan, and where is it documented, even if this
>> document is not up to the standard of a proposal? What is the final
>> target, and when might we expect it to be reached? What is in the
>> pipeline after MRP?
>>
>> Relatedly, guidance on how to write code now so that it will be
>> compatible with future changes helps mitigate the stability issue.
>>
>
> I have been fully in favour of all the proposals implemented so
> far, and I think that having an explicit master plan would be a great idea.
> It would address some of the process-related objections that have been
> raised, and it would provide a fixed long-term target that would be much
> easier to make the whole community aware of and contribute to.
>
> For that purpose, the master plan should be advertised directly on
> the front page of haskell.org. Once we have it settled and agreed, the
> purpose of the base-library commitee would essentially become to figure out
> the details like the timeline and code migration path. One thing they
> wouldn't need to worry about is whether anybody disagrees with their goals.
>
>
> 2) How can I write code that makes use of the Prelude so that it will
>> work with every new GHC release over the next 3 years? 5 years? For
>> example, how can I write a Monad instance now, knowing the changes that
>> are coming, so that the instance will work with every new GHC release
>> for the next 3 years? 5 years? If the answer is "you can't," then when
>> might I be able to do such a thing? As of 8.4? 8.6? I'm embarrassed to
>> say I don't know the answer!
>>
>
> From the discussions so far it appears that the answer for 3 years
> (or at least the next 3 GHC releases) would be to write the code that works
> with the current GHC and base, but this policy has not been codified
> anywhere yet. Knowing the upcoming changes doesn't help with making your
> code any more robust, and I think that's a shame. We could have a
> two-pronged policy:
>
> - code that works and compiles with the latest GHC with no *warnings* will
> continue to work and compile with no *errors* with the following 2
> releases, and
> - code that also follows the forward-compatibility recommendations current
> for that version of GHC will continue to work and compile with no *errors*
> with the following 4 releases.
>

We have adopted a "3 release policy" facing backwards, not forwards.
However, all proposals currently under discussion actually meet a stronger
condition, a 3 release policy that you can slide both forward and backwards
to pick the 3 releases you want to be compatible with without using CPP. It
also appears that all of the changes that we happen to have in the wings

https://ghc.haskell.org/trac/ghc/wiki/Status/BaseLibrary

comply with both of your goals here. However, I hesitate to say that we can
simultaneously meet this goal and the 3 release policy facing backwards
_and_ sufficient notification in all situations, even ones we can't foresee
today. As a guideline? Sure. If we have two plans that can reach the same
end-goal and one complies and the other doesn't, I'd say we should favor
the plan that gives more notice and assurance. However, this also needs to
be tempered against the number of years folks suffer the pain of living in
an inconsistent intermediate state (e.g. having generalized combinators in
Data.List today).

The forward-compatibility recommendations would become a part of
> the online GHC documentation so nobody complains they didn't know about
> them. Personally, I'd prefer if the recommendations were built into the
> compiler itself as a new class of warnings, but then (a) some people would
> insist on turning them on together with -Werror and then complain when
> their builds break and (b) this would increase the pressure on GHC
> implementors.


The current discussion is centering around adding a -Wcompat flag that
warns of changes that you maybe can't yet implement in a way that would be
backwards compatible with a 3 release backwards-facing window, but which
will eventually cause issues.

-Edward


Re: [Haskell-cafe] Committee M.O. Change Proposals

2015-10-21 Thread Edward Kmett
The committee was formed from a pool of suggestions supplied to SPJ that
represented a fairly wide cross-section of the community.

Simon initially offered both myself and Johan Tibell the role of co-chairs.
Johan ultimately declined.

In the end, putting perhaps too simple a spin on it, the initial committee
was selected: Michael Snoyman for commercial interest, Mark Lentczner
representing the needs of the Platform and implementation concerns, Brent
Yorgey on the theory side, Doug Beardsley representing practitioners,
Joachim Breitner had expressed interest in working on split base, which at
the time was a much more active concern, Dan Doel represented a decent
balance of theory and practice.

Since then we had two slots open up on the committee, and precisely two
self-nominations to fill them, which rather simplified the selection
process. Brent and Doug rotated out and Eric Mertens and Luite Stegeman
rotated in.

Technically, yes, we are self-selected going forward, based on the
precedent of the haskell.org committee and haskell-prime committees, but
you'll note this hasn't actually been a factor yet as there hasn't been any
decision point reached where that has affected a membership decision.

-Edward

On Wed, Oct 21, 2015 at 8:31 AM, Geoffrey Mainland 
wrote:

> On 10/21/2015 07:55 AM, Herbert Valerio Riedel wrote:
> > Hello,
> >
> > On 2015-10-21 at 02:39:57 +0200, Geoffrey Mainland wrote:
> > [...]
> >
> >> In effect, only those who actively follow the libraries list have
> >> had a voice in these decisions. Maybe that is what the community
> >> wants. I hope not. How then can people like me (and Henrik and
> >> Graham) have a say without committing to actively following the
> >> libraries list?
> >>
> >> We have a method to solve this: elected representatives. Right now the
> >> Core Libraries Committee elects its own members; perhaps it is time for
> >> that to change.
> >
> > [...]
> >
> >> Proposal 1: Move to community election of the members of the Core
> >> Libraries Committee. Yes, I know this would have its own issues.
> >
> > How exactly do public elections of representatives address the problem
> > that some people feel left out? Have you considered nominating yourself
> > or somebody else you have confidence in for the core libraries
> > committee? You'd still have to find somebody to represent your
> > interests, regardless of whether the committee is self-elected or
> > direct-elected.
> >
> > Here's some food for thought regarding language design by voting or its
> > indirect form via a directly elected language committee:
> >
> > Back in February there was a large-scale survey which resulted (see [2]
> > for more details) in a rather unequivocal 4:1 majority *for* going
> > through with the otherwise controversial FTP implementation. If the
> > community elections would result in a similar spirit, you could easily
> > end up with a similarly 4:1 pro-change biased committee. Would you
> > consider that a well balanced committee formation?
>
> Thanks, all good points.
>
> It is quite possible that direct elections would produce the exact same
> committee. I wouldn't see that as a negative outcome at all! At least
> that committee would have been put in place by direct election; I would
> see that as strengthening their mandate.
>
> I am very much aware of the February survey. I wonder if Proposal 2, had
> it been in place at the time, would have increased participation in the
> survey.
>
> The recent kerfuffle has caught the attention of many people who don't
> normally follow the libraries list. Proposal 1 is an attempt to give
> them a voice. Yes, they would still need to find a candidate to
> represent their interests. If we moved to direct elections, I would
> consider running. However, my preference is that Proposal 3 go through
> in some form, in which case my main concern would be the Haskell Prime
> committee, and unfortunately nominations for that committee have already
> closed.
>
> >> Proposal 2: After a suitable period of discussion on the libraries
> >> list, the Core Libraries Committee will summarize the arguments for
> >> and against a proposal and post it, along with a (justified)
> >> preliminary decision, to a low-traffic, announce-only email list.
> >> After another suitable period of discussion, they will issue a final
> >> decision. What is a suitable period of time? Perhaps that depends on
> >> the properties of the proposal, such as whether it breaks backwards
> >> compatibility.
> >
> > That generally sounds like a good compromise, if this actually helps
> > reaching the otherwise unreachable parts of the community and have
> > their voices heard.
>
> My hope is that a low-volume mailing list would indeed reach a wider
> audience.
>
> >> Proposal 3: A decision regarding any proposal that significantly
> >> affects backwards compatibility is within the purview of the Haskell
> >> Prime Committee, not the Core Libraries Committee.
> >
> > I don't see how that would 

Re: [Haskell-cafe] MRP, 3-year-support-window, and the non-requirement of CPP

2015-10-10 Thread Edward Kmett
On Wed, Oct 7, 2015 at 3:35 AM, Herbert Valerio Riedel  wrote:

> --8<---cut here---start->8---
> import Control.Applicative as A (Applicative(..))
>
> data Maybe' a = Nothing' | Just' a
>
> instance Functor Maybe' where
> fmap f (Just' v) = Just' (f v)
> fmap _ Nothing'  = Nothing'
>
> instance A.Applicative Maybe' where
> pure = Just'
> f1 <*> f2   = f1 >>= \v1 -> f2 >>= (pure . v1)
>
> instance Monad Maybe' where
> Nothing' >>= _  = Nothing'
> Just' x  >>= f  = f x
>
> return = pure -- "deprecated" since GHC 7.10
> --8<---cut here---end--->8---
>
>
Alternately,

import Control.Applicative
import Prelude

data Maybe' a = Nothing' | Just' a

instance Functor Maybe' where
fmap f (Just' v) = Just' (f v)
fmap _ Nothing'  = Nothing'

instance Applicative Maybe' where



> -- hvr
> ___
> Haskell-prime mailing list
> Haskell-prime@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-prime
>


Re: [Haskell-cafe] MRP, 3-year-support-window, and the non-requirement of CPP

2015-10-10 Thread Edward Kmett
On Tue, Oct 6, 2015 at 3:02 PM, Malcolm Wallace 
wrote:

>
> On 6 Oct 2015, at 17:47, Herbert Valerio Riedel wrote:
> > At the risk of stating the obvious: I don't think it matters from which
> > group a given argument comes from as its validity doesn't depend on the
> > messenger.
>
> In that case, I think you are misunderstanding the relevance of Johan's
> argument here.  Let me try to phrase it differently.  Some people who can
> reasonably claim to have experience with million-line plus codebases are
> warning that this change is too disruptive, and makes maintenance harder
> than it ought to be.  On the other hand, of the people who say the change
> is not really disruptive, none of them have (yet?) made claims to have
> experience of the maintenance of extremely large-scale codebases.


Very well. Let me offer a view from the "other side of the fence."

I personally maintain about 1.3 million lines of Haskell, and over 120
packages on hackage. It took me less than half a day to get everything
running with 7.10, and about two days to build -Wall clean. In that first
day I actually had to spend vastly more time fixing things related to
changes in Typeable, template-haskell and a tweaked corner case in the
typechecker than anything AMP/FTP related. In the end I had to add two type
signatures.

Most of the patches to go -Wall clean looked like

+#if __GLASGOW_HASKELL__ < 710
import Control.Applicative
import Data.Monoid
+#endif

Maybe 10% were more complicated.
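
For concreteness, a full header following that pattern might look like the
sketch below (illustrative only, not any specific patch from the packages
mentioned):

```haskell
{-# LANGUAGE CPP #-}
module Example () where

#if __GLASGOW_HASKELL__ < 710
-- Pre-AMP compilers do not re-export these from the Prelude; on 7.10+
-- the imports would be redundant and trip -Wall, hence the CPP guard.
import Control.Applicative
import Data.Monoid
#endif
```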

-Edward


Re: [Haskell-cafe] MRP, 3-year-support-window, and the non-requirement of CPP

2015-10-10 Thread Edward Kmett
The part of the MRP proposal that I actively care about because it fixes a
situation that *actually causes harm* is moving (>>) to the top level.

Why?

Right now (*>) and (>>) have different default definitions. This means that
code runs often with different asymptotics depending on which one you pick.
Folks often define one but not the other.

This means that the performance of mapM_ and traverse_ needlessly differ.
It means that we can't simply weaken the type constraint on mapM_ and
sequence_ to Applicative, and as a knock-on consequence it means we can't
migrate mapM and sequence out of Traversable to top-level definitions and
thereby simply provide folks with more efficient parallelizable mapping
when they reach for the 'obvious tool'.
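
To make the "two independent defaults" point concrete, here is a minimal
sketch using primed stand-ins for the real classes (the names are mine, not
base's, to avoid clashing with the Prelude):

```haskell
-- Sketch only: the Applicative-style and Monad-style sequencing
-- operators each get their own default definition.
class Functor f => Applicative' f where
  pure'  :: a -> f a
  (<**>) :: f (a -> b) -> f a -> f b

  -- default sequencing in terms of (<**>), like (*>)
  (*>>) :: f a -> f b -> f b
  a *>> b = (id <$ a) <**> b

class Applicative' m => Monad' m where
  (>>==) :: m a -> (a -> m b) -> m b

  -- an *independent* default sequencing in terms of (>>==), like (>>)
  (>>>>) :: m a -> m b -> m b
  m >>>> k = m >>== \_ -> k
```

A type that overrides only one of these leaves the other at its default, so
a combinator built on the Monad operator and one built on the Applicative
operator can have different asymptotics for the very same structure.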

return itself lurking in the class doesn't matter to me all that much as it
doesn't break anybody's asymptotics and it already has a sensible
definition in terms of pure as a default, so effectively you can write code
as if MRP was already in effect today. It is a wart, but one that could be
burned off on however long a time table we want if we choose to proceed.

-Edward

On Wed, Oct 7, 2015 at 5:13 PM, Mark Lentczner 
wrote:

>
> On Wed, Oct 7, 2015 at 9:38 AM, Erik Hesselink 
> wrote:
>
>> While I don't think it detracts from your argument, it seems you
>> misread the original proposal. At no point will it remove `return`
>> completely. It would be moved out of the `Monad` class and be made
>> into a top-level definition instead, so you would still be able to use
>> it.
>>
>
> Then why bother?
> If you don't intend to regard code that uses "return" as old, out-dated,
> in need of updating, etc
> If you don't intend to correct people on #haskell to use pure instead of
> return...
> If you don't tsk tsk all mentions of it in books
> If you don't intend to actually deprecate it.
> Why bother?
>
> But seriously, why do you think that "you would still be able to use it"?
> That is true for only the simplest of code - and untrue for anyone who has
> a library that defines a Monad - or anyone who has a library that they want
> to keep "up to date". Do you really want to have a library where all your
> "how to use this" code has return in the examples? Shouldn't it now be pure?
> Do I now need -XCPP just for Haddock? and my wiki page? And what gets shown
> in Hackage? This is just a nightmare for a huge number of libraries, and
> especially many commonly used ones.
>
> Why bother!
>
>
> ___
> Libraries mailing list
> librar...@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/libraries
>
>


Re: [Haskell-cafe] MRP, 3-year-support-window, and the non-requirement of CPP

2015-10-10 Thread Edward Kmett
On Tue, Oct 6, 2015 at 1:41 PM, Sven Panne  wrote:

> 2015-10-06 18:47 GMT+02:00 Herbert Valerio Riedel :
>
>> [...] That's because -Wall-hygiene (w/o opting out of harmless) warnings
>>
> across multiple GHC versions is not considered a show-stopper.
>>
>
> That's your personal POV, I'm more leaning towards "-Wall -Werror". I've
> seen too many projects where neglecting warning over an extended period of
> time made fixing them basically impossible at the end. Anyway, I think that
> a sane ecosystem should allow *both* POVs, the sloppy one and the strict
> one.
>

Note: You haven't been able to upload a package that has -Werror turned on
in the cabal file for a couple of years now -- even if it is only turned
on for the test suite -- so any -Werror discipline you choose to enforce is
purely local.

-Edward


Re: [Haskell-cafe] MRP, 3-year-support-window, and the non-requirement of CPP

2015-10-10 Thread Edward Kmett
On Sat, Oct 10, 2015 at 4:12 PM, Yuras Shumovich <shumovi...@gmail.com>
wrote:

> On Sat, 2015-10-10 at 15:25 -0400, Edward Kmett wrote:
> > The part of the MRP proposal that I actively care about because it
> > fixes a
> > situation that *actually causes harm* is moving (>>) to the top
> > level.
>
> Sorry if I'm missing something, but moving (>>) is not part of the
> proposal. At least it is not mentioned on the wiki page:
>
> https://ghc.haskell.org/trac/ghc/wiki/Proposal/MonadOfNoReturn
>
> Is the wiki outdated?
>

It arose during the original thread discussing the MRP but wasn't included
in the 'proposal as written' that was sent out.

https://mail.haskell.org/pipermail/libraries/2015-September/026129.html

In many ways that proposal would do better 'on its own' than as part of the
MRP.

> > return itself lurking in the class doesn't matter to me all that much
> > as it doesn't break anybody's asymptotics and it already has a sensible
> > definition in terms of pure as a default, so effectively you can write
> > code as if MRP was already in effect today. It is a wart, but one that
> > could be burned off on however long a time table we want if we choose
> > to proceed.
>
> So the cost of not moving `return` to the top level is zero?
>
> For me the cost of moving it is pretty small, just an hour or two.
> Probably recompiling all the dependencies when switching to newer
> version of GHC will take longer. (Actually I'm still using 7.8 at
> work.) But the cost is definitely nonzero.
>
> The proposal (as written on the wiki page) provides two arguments for
> the change:
>
> There is no reason to include `return` into the next standard. That is
> true.


Nobody is saying that we should remove return from the language. The
proposal was to move it out of the class -- eventually. Potentially on a
very very long time line.

> But we can leave `return` in `GHC` as a compiler-specific extension for
> backward compatibility, can't we?
>

This is effectively the status quo. There is a default definition of return
in terms of pure today. The longer we wait the more tenable this proposal
gets in many ways as fewer and fewer people start trying to support
compiler versions below 7.10. Today isn't that day.

There are some niggling corner cases around viewing its continued existence
as a compiler "extension" though, even just around the behavior when you
import the class with Monad(..): you get more or less than you'd expect.

> Could someone please clarify what is the cost of not moving `return` out
> of `Monad`?
>

The cost of doing nothing is maintaining a completely redundant member
inside the class for all time and an ever-so-slightly more expensive
dictionary for Monad, so retaining return in the class does no real harm
operationally.

While I'm personally somewhat in favor of its eventual migration on
correctness grounds and believe it'd be nice to be able to justify the
state of the world as more than a series of historical accidents when I put
on my libraries committee hat I have concerns.

I'm inclined to say at the least that IF we do decide to proceed on this,
at least the return component should be on a long time horizon, with a
clock tied to the release of a standard, say a Haskell2020. I stress IF,
because I haven't had a chance to go through and do any sort of detailed
tally or poll to get a sense of if there is a sufficient mandate. There is
enough of a ruckus being raised that it is worth proceeding cautiously if
we proceed at all.

-Edward


Proposal: Treat OtherLetter as lower case in the grammar

2014-08-12 Thread Edward Kmett
Back in 2008 or so, GHC changed the behavior of unicode characters in the
parser that parse as OtherLetter to make them parse as lower case so that
languages like Japanese that lack case could be used in identifier names:

https://ghc.haskell.org/trac/ghc/ticket/1103

In a recent thread on reddit Lennart Augustsson pointed out that this
change
was never backported to Haskell'.

http://www.reddit.com/r/haskell/comments/2dce3d/%E0%B2%A0_%E0%B2%A0_string_a/cjo68ij

Would it make sense to adopt this change in the language standard?

When Marlow made the change to GHC, he noted he was considering bringing it
up to Haskell', but here we are 6 years later.

-Edward Kmett


Re: Proposal - Foreign enum support

2014-04-19 Thread Edward Kmett
-1 from me.

Your first example even provides a counter-example.

typedef enum {
 IMG_INIT_JPG = 0x0001,
 IMG_INIT_PNG = 0x0002,
 IMG_INIT_TIF = 0x0004,
 IMG_INIT_WEBP = 0x0008
 } IMG_InitFlags;


Those are defined as powers of two because they are a bit mask you have to
be able to (.|.) together.
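
To make the mask point concrete, here is a hedged sketch (the constants
mirror the IMG_InitFlags values above; the Haskell names are illustrative,
not from any actual binding, and in real bindings would be resolved from
the C header via an .hsc file):

```haskell
import Data.Bits ((.&.), (.|.))
import Foreign.C.Types (CInt)

-- Illustrative flag constants mirroring IMG_InitFlags.
imgInitJPG, imgInitPNG, imgInitTIF, imgInitWEBP :: CInt
imgInitJPG  = 0x0001
imgInitPNG  = 0x0002
imgInitTIF  = 0x0004
imgInitWEBP = 0x0008

-- Callers OR the flags together, producing combined values (here 0x0003)
-- that no single constructor of a plain sum type could represent:
initFlags :: CInt
initFlags = imgInitJPG .|. imgInitPNG

hasFlag :: CInt -> CInt -> Bool
hasFlag flags f = flags .&. f /= 0
```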

This is the sort of thing people write .hsc files for, so they can include
the appropriate header directly and resolve the constants.

Maintaining a separate copy of an enum that goes out of date with the C
version is a recipe for breaking on future versions of the dependency, and
in my experience the majority of cases where the range is discontinuous
arise from when the thing in question is a mask, like this very case.

The remaining cases where you really want to incur all those obligations
are few enough and far enough between that going through a quasiquoter
seems to be the right solution.

-Edward


On Thu, Apr 17, 2014 at 10:19 AM, Merijn Verstraaten mer...@inconsistent.nl
 wrote:

 Cross-post to haskell-prime in case there's any interest for including
 this into the report's FFI specification.

 Proposal - Foreign enum support
 ===

 At the moment the FFI does not have a convenient way of interacting with
 enums (whether proper enums or CPP defines) in C-like languages. Both
 enums and CPP-defined enums are major parts of large C APIs and they are
 thus crucial to writing foreign bindings. A few examples:

 SDL_image defines the following enum:

 typedef enum {
 IMG_INIT_JPG = 0x0001,
 IMG_INIT_PNG = 0x0002,
 IMG_INIT_TIF = 0x0004,
 IMG_INIT_WEBP = 0x0008
 } IMG_InitFlags;

 OpenCL specifies the following typedefs + CPP defined enum:

 typedef uint32_t  cl_uint __attribute__((aligned(4)));
 typedef cl_uint   cl_platform_info;

 /* cl_platform_info */
 #define CL_PLATFORM_PROFILE 0x0900
 #define CL_PLATFORM_VERSION 0x0901
 #define CL_PLATFORM_NAME0x0902
 #define CL_PLATFORM_VENDOR  0x0903
 #define CL_PLATFORM_EXTENSIONS  0x0904

 OpenCL functions will return the above CPP defines as return values of type
 cl_platform_info.

 Current Solutions
 -

 In many cases someone wrapping such a C library would like to expose these
 enums as a simple sum type as this has several benefits: type safety, the
 ability to use haskell constructors for pattern matching, exhaustiveness
 checks.

 Currently the GHC FFI, as specified by Haskell2010, only marshals a small
 set of foreign types and newtypes with exposed constructors of these
 types. As such there seem to be two approaches to wrapping these enums:

  1. Implement an ADT representing the enum and write a manual conversion
     function between the ADT and the corresponding C type (e.g. CInt ->
     Foo and Foo -> CInt).

  2. Use a tool like c2hs to automatically generate the ADT and conversion
     function.

 In both cases the foreign functions are imported using the corresponding C
 type in their signature (reducing type safety), and the user is forced to
 write trivial wrappers for every imported function to convert the ADT to
 the relevant C type and back.
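
For the OpenCL constants above, approach 1 might look like the following
sketch (the Haskell names are hypothetical, not taken from any actual
binding):

```haskell
import Foreign.C.Types (CInt)

-- Hypothetical ADT mirroring the cl_platform_info constants above.
data PlatformInfo
  = PlatformProfile
  | PlatformVersion
  | PlatformName
  | PlatformVendor
  | PlatformExtensions
  deriving (Eq, Show)

-- The manual marshalling function written by hand (approach 1):
fromPlatformInfo :: PlatformInfo -> CInt
fromPlatformInfo PlatformProfile    = 0x0900
fromPlatformInfo PlatformVersion    = 0x0901
fromPlatformInfo PlatformName       = 0x0902
fromPlatformInfo PlatformVendor     = 0x0903
fromPlatformInfo PlatformExtensions = 0x0904

-- Every foreign import is then shadowed by a trivial wrapper, e.g.:
--   foreign import ccall "clGetPlatformInfo"
--     c_clGetPlatformInfo :: CInt -> ...
--   clGetPlatformInfo info = c_clGetPlatformInfo (fromPlatformInfo info)
```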

 This is both tedious to write and costly in terms of code produced; in the
 case of c2hs one calls toEnum . fromIntegral and fromIntegral . fromEnum
 for every argument/result, even though this could trivially be a no-op.

 Worse, since c2hs uses the Enum class for its conversion to/from C types,
 it generates Enum instances like:

 instance Enum Foo where
 fromEnum Bar = 1
 fromEnum Baz = 1337

 toEnum 1 = Bar
 toEnum 1337 = Baz
 toEnum unmatched = error ("PlatformInfo.toEnum: Cannot match " ++
 show unmatched)

 Since succ/pred and enumFromTo's default implementations assume enums
 convert to a contiguous sequence of Ints, the default generated Enum
 instances crash. This problem could be overcome by making c2hs' code
 generation smarter, but that does not eliminate the tediousness of
 wrapping all foreign imported functions with marshalling wrappers, NOR
 does it eliminate the overhead of all this useless marshalling.

 Proposal
 

 Add a new foreign construct for enums. The syntax I propose below is
 rather ugly and ambiguous, and therefore open to bikeshedding, but I
 prefer explaining based on a concrete example.

 foreign enum CInt as Foo where
 Bar = 1
 Baz
 Quux = 1337
 Xyzzy = _

 This would introduce a new type 'Foo' with semantics approximately
 equivalent to newtype Foo = Foo CInt, plus the pattern synonyms
 pattern Bar = Foo 1; pattern Baz = Foo 2; pattern Quux = Foo 1337;
 pattern Xyzzy = Foo _.
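
Something close to this surface behavior is already expressible with GHC's
PatternSynonyms extension (GHC 7.8+); a sketch, minus the exhaustiveness
checking and header-driven derivation the proposal implies:

```haskell
{-# LANGUAGE PatternSynonyms #-}

import Foreign.C.Types (CInt)

-- Roughly the semantics described above, using existing machinery.
newtype Foo = Foo CInt

pattern Bar  = Foo 1
pattern Baz  = Foo 2
pattern Quux = Foo 1337

describe :: Foo -> String
describe Bar     = "Bar"
describe Baz     = "Baz"
describe Quux    = "Quux"
describe (Foo _) = "Xyzzy"   -- the catch-all / wildcard case
```

Unlike the proposal, this gives no exhaustiveness checking for the named
patterns (absent COMPLETE-style annotations) and the values must still be
kept in sync with the C header by hand.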

 Explicit listing of the value corresponding to a constructor should be
 optional, missing values should just 

Re: PROPOSAL: Literate haskell and module file names

2014-03-17 Thread Edward Kmett
Foo+rst.lhs does nicely dodge the collision with jhc.

How does ghc do the search now? By trying each alternative in turn?




On Sun, Mar 16, 2014 at 1:14 PM, Merijn Verstraaten
mer...@inconsistent.nl wrote:

 I agree that this could collide, see my beginning remark that I believe
 that the report should provide a minimal specification how to map modules
 to filenames and vice versa.

 Anyhoo, I'm not married to this specific suggestion. Carter suggested
 Foo+rst.lhs on IRC, other options would be Foo.rst+lhs or
 Foo.lhs+rst, I don't particularly care what as long as we pick something.
 Patching tools to support whatever solution we pick should be trivial.

 Cheers,
 Merijn

 On Mar 16, 2014, at 16:41 , Edward Kmett wrote:

 One problem with Foo.*.hs or even Foo.md.hs mapping to the module name Foo
 is that, as I recall, JHC will look for Data.Vector in Data.Vector.hs as
 well as Data/Vector.hs.

 This means that on a case insensitive file system Foo.MD.hs matches both
 conventions.

 Do I want to block a change to GHC because of an incompatible change in
 another compiler? Not sure, but I at least want to raise the issue so it
 can be discussed.

 Another small issue is that this means you need to actually scan the
 directory rather than look for particular file names, but off the top of
 my head I don't really expect directories to be full enough for that to
 be a performance problem.

 -Edward



 On Sun, Mar 16, 2014 at 8:56 AM, Merijn Verstraaten 
 mer...@inconsistent.nl wrote:

 Ola!

 I didn't know what the most appropriate venue for this proposal was so I
 crossposted to haskell-prime and glasgow-haskell-users, if this isn't the
 right venue I welcome advice where to take this proposal.

 Currently the report does not specify the mapping between filenames and
 module names (this is an issue in itself, it essentially makes writing
 haskell code that's interoperable between compilers impossible, as you
 can't know what directory layout each compiler expects). I believe that a
 minimal specification *should* go into the report (hence, haskell-prime).
 However, this is a separate issue from this proposal, so please start a new
 thread rather than sidetracking this one :)

 The report only mentions that by convention .hs extensions imply normal
 haskell and .lhs literate haskell (Section 10.4). In the absence of
 guidance from the report GHC's convention of mapping module Foo.Bar.Baz to
 Foo/Bar/Baz.hs or Foo/Bar/Baz.lhs seems the only sort of standard that
 exists. In general this standard is nice enough, but the mapping of
 literate haskell is a bit inconvenient: it leaves it completely ambiguous
 what the non-haskell content of said file is, which is annoying for tool
 authors.

 Pandoc has adopted the policy of checking for further file extensions for
 literate haskell source, e.g. Foo.rst.lhs and Foo.md.lhs. Here .rst.lhs
 gets interpreted as being reStructured Text with literate haskell and
 .md.lhs is Markdown with literate haskell. Unfortunately GHC currently maps
 filenames like this to the module names Foo.rst and Foo.md, breaking
 anything that wants to import the module Foo.

 I would like to propose allowing an optional extra extension in the
 pandoc style for literate haskell files, mapping Foo.rst.lhs to module name
 Foo. This is a backwards compatible change as there is no way for
 Foo.rst.lhs to be a valid module in the current GHC convention. Foo.rst.lhs
 would map to module name Foo.rst but module name Foo.rst maps to
 filename Foo/rst.hs which is not a valid haskell module anyway as the rst
 is lowercase and module names have to start with an uppercase letter.

 Pros:
  - Tool authors can more easily determine non-haskell content of literate
 haskell files
  - Currently valid module names will not break
  - Report doesn't specify behaviour, so GHC can do whatever it likes

 Cons:
  - Someone has to implement it
  - ??

 Discussion: 4 weeks

 Cheers,
 Merijn


 ___
 Glasgow-haskell-users mailing list
 glasgow-haskell-us...@haskell.org
 http://www.haskell.org/mailman/listinfo/glasgow-haskell-users






Re: PROPOSAL: Literate haskell and module file names

2014-03-16 Thread Edward Kmett
One problem with Foo.*.hs or even Foo.md.hs mapping to the module name Foo
is that, as I recall, JHC will look for Data.Vector in Data.Vector.hs as
well as Data/Vector.hs.

This means that on a case insensitive file system Foo.MD.hs matches both
conventions.

Do I want to block a change to GHC because of an incompatible change in
another compiler? Not sure, but I at least want to raise the issue so it
can be discussed.

Another small issue is that this means you need to actually scan the
directory rather than look for particular file names, but off the top of
my head I don't really expect directories to be full enough for that to
be a performance problem.

-Edward



On Sun, Mar 16, 2014 at 8:56 AM, Merijn Verstraaten
mer...@inconsistent.nl wrote:

 Ola!

 I didn't know what the most appropriate venue for this proposal was so I
 crossposted to haskell-prime and glasgow-haskell-users, if this isn't the
 right venue I welcome advice where to take this proposal.

 Currently the report does not specify the mapping between filenames and
 module names (this is an issue in itself, it essentially makes writing
 haskell code that's interoperable between compilers impossible, as you
 can't know what directory layout each compiler expects). I believe that a
 minimal specification *should* go into the report (hence, haskell-prime).
 However, this is a separate issue from this proposal, so please start a new
 thread rather than sidetracking this one :)

 The report only mentions that by convention .hs extensions imply normal
 haskell and .lhs literate haskell (Section 10.4). In the absence of
 guidance from the report GHC's convention of mapping module Foo.Bar.Baz to
 Foo/Bar/Baz.hs or Foo/Bar/Baz.lhs seems the only sort of standard that
 exists. In general this standard is nice enough, but the mapping of
 literate haskell is a bit inconvenient, it leaves it completelyl ambiguous
 what the non-haskell content of said file is, which is annoying for tool
 authors.

 Pandoc has adopted the policy of checking for further file extensions for
 literate haskell source, e.g. Foo.rst.lhs and Foo.md.lhs. Here .rst.lhs
 gets interpreted as being reStructured Text with literate haskell and
 .md.lhs is Markdown with literate haskell. Unfortunately GHC currently maps
 filenames like this to the module names Foo.rst and Foo.md, breaking
 anything that wants to import the module Foo.

 I would like to propose allowing an optional extra extension in the pandoc
 style for literate haskell files, mapping Foo.rst.lhs to module name Foo.
 This is a backwards compatible change as there is no way for Foo.rst.lhs to
 be a valid module in the current GHC convention. Foo.rst.lhs would map to
 module name Foo.rst but module name Foo.rst maps to filename
 Foo/rst.hs which is not a valid haskell module anyway as the rst is
 lowercase and module names have to start with an uppercase letter.

 Pros:
  - Tool authors can more easily determine non-haskell content of literate
 haskell files
  - Currently valid module names will not break
  - Report doesn't specify behaviour, so GHC can do whatever it likes

 Cons:
  - Someone has to implement it
  - ??

 Discussion: 4 weeks

 Cheers,
 Merijn




Re: Proposal: NoImplicitPreludeImport

2013-05-28 Thread Edward Kmett
I'm definitely in favor of having the *option* to shut off the import of
the Prelude without entangling the notion of overriding all of the
desugarings.

I do, however, feel that removing the Prelude from base is a rather strong
step, which hasn't seen much support.


On Tue, May 28, 2013 at 11:23 AM, Ian Lynagh i...@well-typed.com wrote:


 Dear Haskellers,

 I have made a wiki page describing a new proposal,
 NoImplicitPreludeImport, which I intend to propose for Haskell 2014:

 http://hackage.haskell.org/trac/haskell-prime/wiki/NoImplicitPreludeImport

 What do you think?


 Thanks to the folks on #ghc who gave some comments on an earlier draft.


 Thanks
 Ian
 --
 Ian Lynagh, Haskell Consultant
 Well-Typed LLP, http://www.well-typed.com/




Re: Proposal: NoImplicitPreludeImport

2013-05-28 Thread Edward Kmett
Yes, but that leaves an objection I have to this proposal: it would require
me personally to write an extra 1691 import lines -- based on the contents
of my local development folder -- just to get back up to par with the
status quo.

As a flag I can turn on in a package to get finer grained control? I'm
totally for it!

As a source of *mandatory* boilerplate at the head of each module? It
doesn't strike me as a good trade-off.


On Tue, May 28, 2013 at 11:52 AM, Ian Lynagh i...@well-typed.com wrote:

 On Tue, May 28, 2013 at 11:41:44AM -0400, Edward Kmett wrote:
  I'm definitely in favor of having the *option* to shut off the import of
  the Prelude without entangling the notion of overriding all of the
  desugarings.
 
  I do, however, feel that removing the Prelude from base is a rather
 strong
  step, which hasn't seen much support.

 Just to clarify: This proposal is to stop importing the module
 implicitly, not to actually remove the module.


 Thanks
 Ian
 --
 Ian Lynagh, Haskell Consultant
 Well-Typed LLP, http://www.well-typed.com/



Re: proposal for trailing comma and semicolon

2013-05-17 Thread Edward Kmett
My main concern is that it's a really weird corner case for the grammar to
remember for tuple sections, and it does have very weird grammar
specification issues.

I really have no objection to it for the other cases. It'd make export
lists cleaner, and maybe a few other cases, but how often can you really
meaningfully comment out one field of a tuple and have the surrounding
code make any sense?

-Edward


On Fri, May 17, 2013 at 12:35 PM, Bardur Arantsson s...@scientician.net wrote:

 On 05/17/2013 06:32 PM, Johan Tibell wrote:
  On Fri, May 17, 2013 at 9:17 AM, Garrett Mitchener 
  garrett.mitche...@gmail.com wrote:
 
  Anyway, this is a paper cut in the language that has been bugging me
 for
  a while, and since there's now a call for suggestions for Haskell 2014,
 I
  thought I'd ask about it.
 
 
  I've also thought about this issue and I agree with Garrett, allowing
 that
  trailing comma (or semicolon) would help readability*. If it doesn't work
  with tuples, perhaps we could at least do it with lists and records?
 

 Multiline tuples don't seem all that common, so +1 on that from me.





Re: relaxing instance declarations

2013-04-30 Thread Edward Kmett
The problem I see with this is it becomes very brittle to just silently
accept bad class member names.

Before, if I had a method

class Foo a where
  bar :: a -> Int
  bar _ = 0

and I went to implement something with a typo

instance Foo Int where
  baz = id

then I'd get an error, but under your proposal it'd just silently be
accepted, and lead to long nights searching for why my instance wasn't
doing what I expected, long after I'd shipped my product.

This kind of class isn't an academic concern, many real world classes are
defined with 2-3 members circularly. Consider Foldable or Traversable.

newtype Baz a = Baz a

instance Foldable Baz where
  foldmap f (Baz a) = f a

is an innocent typo that would then just mean that foldMap when applied to
Baz will silently loop forever with no warning to the programmer.
Traversable Baz behaves similarly via the cycle between sequenceA+fmap
and traverse.
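
A minimal sketch of the mutual-default hazard (a toy class, not base's
actual Foldable, though its foldMap/foldr defaults are mutual in the same
way):

```haskell
import Data.Monoid (Endo(..))

class MyFoldable t where
  myFoldMap :: Monoid m => (a -> m) -> t a -> m
  myFoldMap f = myFoldr (\a m -> f a `mappend` m) mempty  -- default via myFoldr

  myFoldr :: (a -> b -> b) -> b -> t a -> b
  myFoldr f z t = appEndo (myFoldMap (Endo . f) t) z      -- default via myFoldMap

newtype Box a = Box a

-- With the intended definition, everything is fine:
instance MyFoldable Box where
  myFoldMap f (Box a) = f a

-- But if the member name were typo'd ("myFoldmap") and silently accepted
-- as a fresh local binding, neither class member would be overridden,
-- and the two defaults would call each other forever.
```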

I'd argue that I've made typos on member names *far* more often than I've
wanted this feature.

In fact, I've actually tried to live with a very similar lexical scoping
design in a toy language of mine I called Kata. In Kata, you could just
introduce new members by writing them in a 'public' block, and constrain
subclasses by putting in members without definitions, but it was
sufficiently brittle that I wound up adding another syntactic construct
which could only be used to instantiate definitions, not make new names.
That resolved the issue and made the language much nicer to play around in.

I'd really rather not switch from a design that is robust and usually does
the right thing to one that is more brittle and prone to introducing hard
to find bugs. =(

-Edward



On Tue, Apr 30, 2013 at 7:05 PM, Doug McIlroy d...@cs.dartmouth.edu wrote:

 Max's idea (see below) of a second where clause is cute, but
 not sanctioned by Haskell syntax.

 Iavor wrote, It would be quite arbitrary to restrict this only
 to instances.

 Actually what I have in mind is to make the language MORE
 consistent, by eliminating distinctions between instance-wheres
 and ordinary declaration-wheres.  Currently instance-wheres may
 only declare class methods, while declaration-wheres may declare
 variables at will.  Also instance-wheres may not declare type
 signatures, while declaration-wheres may.  I propose dropping
 these restrictions on instance-wheres.

 Hazard: Adding a method to an existing class could accidentally
 capture a name that was previously local to an instance-where.
 Capture can be prevented by declaring type signatures for local
 variables.  The compiler might warn when such defensive
 declarations are lacking.
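 Concretely, the relaxed instance-where would permit something like the
 following (hypothetical syntax; today's Haskell rejects both the
 auxiliary binding and its local type signature inside an instance body):

```haskell
data Tree = Leaf | Node Tree Tree

instance Show Tree where
  show t = "tree of depth " ++ show (depth t)

  depth :: Tree -> Int      -- local signature: disallowed today
  depth Leaf       = 0      -- non-method binding: disallowed today
  depth (Node l r) = 1 + max (depth l) (depth r)
```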

 Doug

 On Mon, 29 Apr 2013 15:56 Iavor Diatchki iavor.diatc...@gmail.com wrote

 Hello,

 I think that if we want something along those lines, we should consider a
 more general construct that allows declarations to scope over other
 declarations (like SML's `local` construct).  It would be quite arbitrary
 to restrict this only to instances.

 -Iavor



 On Mon, Apr 29, 2013 at 2:41 PM, Max Bolingbroke 
 batterseapo...@hotmail.com
  wrote:

  You could probably get away with just using two where clauses:
 
   instance Foo a where
     bar = ...
       where
         auxiliary = ...
 
 
 
 
  On 28 April 2013 18:42, Edward Kmett ekm...@gmail.com wrote:
 
  Makes sense. I'm not sure what a good syntactic story would be for that
  feature though. Just writing down member names that aren't in the class
  seems to be too brittle and error prone, and new keywords seems uglier
 than
  the current situation.
 
  Sent from my iPad
 
  On Apr 28, 2013, at 1:24 PM, Doug McIlroy d...@cs.dartmouth.edu
 wrote:
 
   Not always. For example, you can't mess with the declaration
   of a standard class, such as Num.
  
   On Sun, Apr 28, 2013 at 12:06 PM, Edward Kmett ekm...@gmail.com
  wrote:
  
   You can always put those helper functions in the class and then just
  not
   export them from the module.
  
   On Sun, Apr 28, 2013 at 10:49 AM, Doug McIlroy d...@cs.dartmouth.edu
  wrote:
  
   Is there any strong reason why the where clause in an instance
   declaration cannot declare anything other than class
   operators? If not, I suggest relaxing the restriction.
  
   It is not unusual for declarations of class operators to
   refer to special auxiliary functions. Under current rules
   such functions have to be declared outside the scope in
   which they are used.
  
   Doug McIlroy
 
  ___
  Haskell-prime mailing list
  Haskell-prime@haskell.org
  http://www.haskell.org/mailman/listinfo/haskell-prime
 
 
 
 
 



Re: Is it time to start deprecating FunDeps?

2013-04-30 Thread Edward Kmett
It seems to be my day to be the villain crying out against everything
someone proposes. ;)

I for one would be strongly against a proposal that started actively
deprecating the use of functional dependencies in user code.

There are a number of situations where the sheer amount of code you're
forced to write by the mechanical translation to equalities and type
families is explosive. I have dozens of classes of forms like

class Wrapped s t a b | a -> s, b -> t, a t -> s, b s -> t

that become frankly nigh unusable after translation -- other examples are
much worse. Similarly, in my experience, code written with monads-tf is
almost always about 15% more verbose than code written with the mtl. It
costs me two characters to put a space and an 's' in for MonadState s m,
but I'm usually not talking about the class unless I also reference the
state type in the signature later, and there it saves me several characters
per occurrence. When you have a more complicated functional dependency set
like the one above, though, the difference isn't just a small constant amount
of overhead.
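As a smaller illustration of that mechanical translation than Wrapped (all names here are invented for the sketch, not taken from any library): a single fundep arrow becomes an associated type, and every signature that mentioned the determined parameter must thread the family, or an equality constraint, instead.

```haskell
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE FlexibleInstances #-}

-- Fundep style: the instance determines s, so constraints read
-- "MonadStateFD s m =>" and s is usable directly in signatures.
class Monad m => MonadStateFD s m | m -> s where
  getFD :: m s

-- Equality/family translation: the arrow m -> s becomes an associated
-- type, and every use site writes StateOf m (or s ~ StateOf m).
class Monad m => MonadStateTF m where
  type StateOf m
  getTF :: m (StateOf m)

-- The reader monad ((->) Int) as a trivial carrier for both:
instance MonadStateFD Int ((->) Int) where
  getFD = id

instance MonadStateTF ((->) Int) where
  type StateOf ((->) Int) = Int
  getTF = id

-- With four arrows, as in Wrapped above, each arrow needs its own
-- family or equality at every use site, hence the explosion.
```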

It isn't that I use these things out of ignorance. I use them deliberately
when the alternative isn't palatable.

I'm all for desugaring everything into the same form internally,
but Haskell as a culture has long embraced surface complexity so long as it
can desugar to something internally reasonable: Consider, let vs. where,
multiple definitions vs. case, layout vs. {}'s, etc.

This is despite their varying surface complexities: let is an expression
while where is tied to the enclosing binding; multiple equations let you
backtrack across pattern matches and guards spanning several arguments; and
so on.

Even changing out the implementation strategy for FDs in GHC has not been
without pain, as it has resulted in changing the semantics of a number of
corner cases. In fact, one could argue a large part of the stall in there
being any progress in standardizing something in the TF vs FD vs both arena
has been that it is very much a moving target. The document we would write
today bears very little resemblance to the document we'd have written 5
years ago. To standardize something you need to know what it means. If we'd
standardized FDs 5 years ago, we'd be sitting here with an increasingly
irrelevant language standard today or stuck in our tracks, and with very
little practical impact on the community to show for it, and we'd be likely
rewriting the whole document now.

It takes a lot of effort to write up something like either TFs or FDs in
their full generality, and the resulting document if based on just
transcoding the state of GHC documents an implementation, not necessarily
the most reasonable commonly agreeable semantics.

Arguably haskell-prime's role should be to follow far enough behind that
the right decisions can be reached by implementations that blaze ahead of
it.

The only Haskell compiler that even has type equalities is GHC at this point.
They aren't a small change to introduce to a compiler. I for one work on a
rather Haskelly compiler for my day job, when I'm not crying gloom and doom
on mailing lists, and have been actively considering how to go about it for
almost two years now without losing the other properties that make that
compiler special. I'd really be a lot more comfortable having seen more
success stories of people converting Haskell compilers over to this kind of
approach before I felt it wise to say that this way that we know works and
which has been productively in use in real code for ten years should be
deprecated in favor of a way that only works with the single compiler.

A large part of the pain of choosing between FDs and TFs is that both
have different somewhat overlapping strengths. In the end I don't think we
will wind up choosing between them, we'll just desugar both to a common
core. Hopefully after we have more than one point in the design space to
choose from.

Removing a language feature just because another one can duplicate it with
explosively more verbose code doesn't strike me as a very Haskelly way
forward.

-Edward



On Tue, Apr 30, 2013 at 1:31 AM, AntC anthony_clay...@clear.net.nz wrote:


 Now that the Type Equality coercions extension is stable and has been
 blessed by SPJ as a functional-dependency-like mechanism (but using
 equalities) for the result type [in
  http://hackage.haskell.org/trac/ghc/wiki/Records/OverloadedRecordFields#Higherranktypesandtypefunctions],
 no code is compelled to use FunDeps.

 [Note that this is orthogonal to the issue of overlaps. You can use the
 equalities mechanism happily with overlaps, see example below.
  This is also orthogonal to type functions/families/associated types. You
 can use the equalities mechanism happily without type families.]

 There's plenty of code using FunDeps, so we couldn't just withdraw the
 extension. But we could start deprecating it.

 Better still, given that there is a mechanical way to convert FunDeps to
 

Re: relaxing instance declarations

2013-04-28 Thread Edward Kmett
You can always put those helper functions in the class and then just not
export them from the module.
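A sketch of that workaround (class and names hypothetical): the helper is declared as a class member, then simply omitted from the export list.

```haskell
-- In a real module the header would read
--   module Pretty ( Pretty(pretty) ) where
-- so importers see the class and 'pretty' but never 'prettyAux'.

class Pretty a where
  pretty :: a -> String
  prettyAux :: a -> String   -- helper; left out of the export list
  prettyAux _ = ""

instance Pretty Bool where
  prettyAux True  = "yes"
  prettyAux False = "no"
  pretty b = "<" ++ prettyAux b ++ ">"
```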


On Sun, Apr 28, 2013 at 10:49 AM, Doug McIlroy d...@cs.dartmouth.edu wrote:

 Is there any strong reason why the where clause in an instance
 declaration cannot declare anything other than class
 operators? If not, I suggest relaxing the restriction.

 It is not unusual for declarations of class operators to
 refer to special auxiliary functions. Under current rules
 such functions have to be declared outside the scope in
 which they are used.

 Doug McIlroy



Re: relaxing instance declarations

2013-04-28 Thread Edward Kmett
Makes sense. I'm not sure what a good syntactic story would be for that feature 
though. Just writing down member names that aren't in the class seems to be too 
brittle and error prone, and new keywords seems uglier than the current 
situation.

Sent from my iPad

On Apr 28, 2013, at 1:24 PM, Doug McIlroy d...@cs.dartmouth.edu wrote:

 Not always. For example, you can't mess with the declaration
 of a standard class, such as Num.
 
 On Sun, Apr 28, 2013 at 12:06 PM, Edward Kmett ekm...@gmail.com wrote:
 
 You can always put those helper functions in the class and then just not
 export them from the module.
 
 On Sun, Apr 28, 2013 at 10:49 AM, Doug McIlroy d...@cs.dartmouth.eduwrote:
 
 Is there any strong reason why the where clause in an instance
 declaration cannot declare anything other than class
 operators? If not, I suggest relaxing the restriction.
 
 It is not unusual for declarations of class operators to
 refer to special auxiliary functions. Under current rules
 such functions have to be declared outside the scope in
 which they are used.
 
 Doug McIlroy



Re: Bang patterns

2013-02-05 Thread Edward Kmett
On the topic of liberalizing operators that are currently only used in
patterns, another one that would be amazing to have as a valid term (or
type operator) is @, using similar () tricks. One-character operator names
are in dreadfully short supply and really help make nice DSLs.

-Edward

On Tue, Feb 5, 2013 at 8:42 AM, Ian Lynagh i...@well-typed.com wrote:

 On Mon, Feb 04, 2013 at 07:26:16PM -0500, Edward Kmett wrote:
  If space sensitivity or () disambiguation is being used on !, could one
 of
  these also be permitted on ~ to permit it as a valid infix term-level
  operator?

 I don't think there's any reason ~ couldn't be an operator, defined with
 the
 (~) x y = ...
 syntax.

 Allowing it to be defined with infix syntax would be a little trickier.


 Hmm, I've just realised that if we decide to make !_ and !foo lexemes,
 then we'd also want !(+) to be a lexeme, which presumably means we'd
 want (+) to be a single lexeme too (and also `foo`, for consistency).
 But I don't think making that change would be problematic.


 Thanks
 Ian




Re: Bang patterns

2013-02-04 Thread Edward Kmett
If space sensitivity or () disambiguation is being used on !, could one of
these also be permitted on ~ to permit it as a valid infix term-level
operator?

That would be an amazingly valuable symbol to be able to reclaim for the
term level for equivalences, and for folks who come from other languages
where it is used like liftA2 (,) in parsing libraries, etc.
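For instance (with (~:) as a hypothetical stand-in, since ~ itself is not legal at the term level today):

```haskell
import Control.Applicative (liftA2)

-- If ~ could be reclaimed, parser combinators could pair results as
-- p ~ q; for now one settles for a longer name:
(~:) :: Applicative f => f a -> f b -> f (a, b)
(~:) = liftA2 (,)
infixl 6 ~:
```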

-Edward

On Mon, Feb 4, 2013 at 6:42 PM, Ian Lynagh i...@well-typed.com wrote:

 On Mon, Feb 04, 2013 at 10:37:44PM +, Simon Peyton-Jones wrote:
 
  I don't have a strong opinion about whether
f ! x y ! z = e
  should mean the same; ie whether the space is significant.   I think
 it's probably more confusing if the space is significant (so its presence
 or absence makes a difference).

 I also don't feel strongly, although I lean the other way:

 I don't think anyone writes f ! x when they mean f with a strict
 argument x, and I don't see any particular advantage in allowing it.
 In fact, I think writing that is less clear than f !x, so there is an
 advantage in disallowing it.

 It also means that existing code that defines a (!) operator in infix
 style would continue to work, provided it puts whitespace around the !.


 Thanks
 Ian




Re: Status of Haskell'?

2012-11-27 Thread Edward Kmett
I think it has proven out pretty well in practice that we probably want both
in the surface language. I know minimalists on the TF side of the debate
have tried to make the case that you don't even need FDs in the surface
syntax, but there are lots of places where having a class with multiple
directional constraints makes the code much, much more sane.

I would be loathe to sacrifice either of them.

-Edward

On Tue, Nov 27, 2012 at 12:11 PM, Brandon Allbery allber...@gmail.com wrote:

 uestion yet?  I thought MPTC was not considered usable without one of
 those, and neither is yet considered standard (with some good reason in the
 case of FDs).



Re: In opposition of Functor as super-class of Monad

2012-10-25 Thread Edward Kmett
Tony, I think you misparsed the proposal. 

The ...'s were for specific monads indicating the additional work required for 
each Monad.

I think the only real proposal on the table is the obvious one of adding 
Applicative as a superclass of Monad.

From there, there are a couple of incremental improvements that could be made, 
like adding the unimplemented superclass defaults or adding the equivalent of 
DefaultSignatures to the language spec to reduce the burden on Monad 
implementors.

In practice I think either of those extensions would be premature to add to the 
language specification at this time.

I would be 100% behind adding the Applicative constraint as a superclass of 
Monad, and perhaps even some bikeshedding, like exporting Applicative from 
the Prelude, because otherwise you can't define a Monad without an import, 
while you can now.

I would be strongly against requiring superclass defaults or DefaultSignatures 
in the haskell standard, however. The former is a largely untested point in the 
design space and the latter has issues where it tightly couples classes with 
their dependencies, leading to unbreakable cycles between classes that all have 
to be defined together and poor engineering practices.

Best,
--Edward


On Oct 25, 2012, at 5:46 PM, Tony Morris tonymor...@gmail.com wrote:

 I should have been clearer, sorry. I should hope it is not simply Functor ->
 Applicative -> Monad.
 
 Perhaps I do not understand the purpose of this thread, but fixing the
 hierarchy in this way is a mistake of similar magnitude to the original
 position -- one that I would cringe at seeing repeated. That is why I
 thought such a discussion was on-topic.
 
 
 On 25/10/12 10:12, Ben Franksen wrote:
 Tony Morris wrote:
 I should hope not. The identity element (return, coreturn, mempty, pure,
 Category.id) is almost never needed.
 
 * http://hackage.haskell.org/package/semigroupoids
 * https://gist.github.com/3871764
 Off-topic. Feel free to start a new thread named "The bombastic one-and-true 
 class hierarchy I always wanted to have". These proposals have their merits, 
 and I greatly respect the category theoretic knowledge that went into them 
 -- but this is another discussion. This thread refers to a rather modest 
 correction in the standard libraries, not a complete re-design. The idea is 
 to fix something that is widely accepted as an unfortunate ommision (in 
 fact, Oleg's comment is one of the very few that question the idea of adding 
 super class constraints to Monad in principle).
 
 BTW, it is unclear what your "I should hope not" refers to, since in both of 
 the hierarchies you linked to Applicative *is* a super class of Monad.
 
 Cheers
 
 On 25/10/12 04:49, Ben Franksen wrote:
 First, let me make it clear that nowadays we are of course (I hope!)
 talking 
 about making not only Functor, but Applicative a super-class of Monad (so 
 Functor becomes a super-class by transitivity).
 
 Petr P wrote:
 The main objections were that it would break existing code and that it
 would lead to code duplication. The former is serious, [...]
 
 To address the first objection:
 I don't buy this "it breaks lots of code" argument. Adding the missing 
 instances is a complete no-brainer; as you wrote:
 
 instance Applicative ... where
pure   = return
    (<*>)  = ap
 instance Functor ... where
fmap   = liftM
 I do not think it is unreasonable to expect people to add such a simple
 and 
 practically automatic fix to some old programs in the interest of
 cleaning 
 up an old wart (and almost everyone agrees that this would be a good
 thing, 
 in principle).
 
 BTW, I guess most programs already contain the Functor instances (but
 maybe 
 not Applicative, as it is newer).
 
 I agree with Petr Pudlak that code duplication is not an issue, see
 above. 
 And yes, these automatic instances may have stronger super-class 
 constraints than strictly necessary. So what? The program didn't need the 
 Functor (or Applicative) instance anyway (or it already would have
 defined 
 it, in which case no change would be needed at all).
 
 Default superclass instances strike me as a complicated proposal for 
 solving trivial problems. The switch in Control.Exception (from data 
 Exception to class Exception) was much more disrupting, adapting programs 
 meant lots of changes everywhere exceptions are handled, not just adding 
 some trivial instances. Still people managed the transition.
 
 Cheers
 
 
 -- 
 Tony Morris
 http://tmorris.net/
 
 
 


Re: Proposal: Scoping rule change

2012-07-24 Thread Edward Kmett
+1 from me

I can't count the number of times I've had this bite me when writing 
ByteString-like APIs that pun names from the Prelude.

On Jul 23, 2012, at 8:28 PM, Lennart Augustsson lenn...@augustsson.net wrote:

 It's not often that one gets the chance to change something as
 fundamental as the scoping rules of a language.  Nevertheless, I would
 like to propose a change to Haskell's scoping rules.
 
 The change is quite simple.  As it is, top level entities in a module
 are in the same scope as all imported entities.  I suggest that this
 is changed to that the entities from the module are in an inner scope
 and do not clash with imported identifiers.
 
 Why?  Consider the following snippet
 
 module M where
 import I
 foo = True
 
 Assume this compiles.  Now change the module I so it exports something
 called foo.  After this change the module M no longer compiles since
 (under the current scoping rules) the imported foo clashes with the
 foo in M.
 
 Pros: Module compilation becomes more robust under library changes.
 Fewer imports with hiding are necessary.
 
 Cons: There's the chance that you happen to define a module identifier
 with the same name as something imported.  This will typically lead to
 a type error, but there is a remote chance it could have the same
 type.
 
 Implementation status: The Mu compiler has used the scoping rule for
 several years now and it works very well in practice.
 
   -- Lennart
 


Re: String != [Char]

2012-03-25 Thread Edward Kmett
On Sun, Mar 25, 2012 at 11:42 AM, Gabriel Dos Reis 
g...@integrable-solutions.net wrote:

 Perhaps we are underestimating their competence and are
 complicating their lives unnecessarily...


Have you ever actually taught an introductory languages course?

If anything we delude ourselves by overestimating the ability of kids just
shortly out of highschool to assimilate an entire new worldview in a couple
of weeks while they are distracted by other things. Any additional
distraction that makes this harder is a serious pain point.

Consequently, in my experience, most instructors don't even go outside of
the Prelude, except perhaps to introduce simple custom data types that
their students define. The goal in that period is to get the students
accustomed to non-strictness, do some list processing, and hope that an
understanding of well-founded recursion vs. productive corecursion sticks,
because these are the things that you can't teach well in another language
and which are useful to the student no matter what tools they wind up using
in the future.

I would rather extra time be spent trying to get the users up to speed on
the really interesting and novel parts of the language, such as typeclasses
and monads in particular, than lose at least a quarter of my time fiddling
about with text processing, a special case API and qualified imports,
because those couple of weeks are going to shape many of those students'
opinion of the language forever.

-Edward


Re: String != [Char]

2012-03-23 Thread Edward Kmett
Like I said, my objection to including Text is a lot less strong than my 
feelings on any notion of deprecating String.

However, I still see a potentially huge downside from an pedagogical 
perspective to pushing Text, especially into a place where it will be front and 
center to new users. String lets the user learn about induction, and encourages 
a Haskelly programming style, where you aren't mucking about with indices and 
Builders everywhere, which is frankly hard to avoid when building Text. If you 
cons or append to build up a Text fragment, you're doing it wrong.

The pedagogical concern is quite real: remember, many introductory language 
classes only have time to present Haskell and the list data type and not much 
else. Showing parsing through pattern matching on strings makes for a very 
powerful tool; it's harder to show that with Text.
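The kind of introductory exercise this enables (an illustrative example, not taken from the original message): a field splitter written by structural recursion on String = [Char].

```haskell
-- Split a comma-separated line into fields by recursing on the list
-- structure of String; break is from the Prelude.
splitFields :: String -> [String]
splitFields s = case break (== ',') s of
  (field, ',' : rest) -> field : splitFields rest
  (field, _)          -> [field]
```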

But even when taking apart Text, the choice of UTF-16 internally makes it pretty 
much a worst case for many string manipulation purposes (e.g. slicing has to 
spend linear time scanning the string), due to the existence of codepoints 
outside of plane 0.

The major benefits of Text come from FFI opportunities, but even there if you 
dig into its internals it has to copy out of the array to talk to foreign 
functions because it lives in unpinned memory unlike ByteString.

The workarounds for these limitations all require access to the internals, so 
a Text proposed in an implementation-agnostic manner is less than useful, and 
one supplied with a rigid set of implementation choices seems to fossilize the 
current design.

All of these things make me lean towards a position that it is premature to 
push Text as the one true text representation.

That said, I am very sympathetic to the position that the standard should ensure that 
there are Text equivalents for all of the exposed string operations, like read, 
show, etc, and the various IO primitives, so that a user who is savvy to all of 
these concerns has everything he needs to make his code perform well.

Sent from my iPad

On Mar 23, 2012, at 1:32 PM, Brandon Allbery allber...@gmail.com wrote:

 On Fri, Mar 23, 2012 at 13:05, Edward Kmett ekm...@gmail.com wrote:
 Isn't it enough that it is part of the platform?
 
 As long as the entire Prelude and large chunks of the bootlibs are based 
 around String, String will be preferred.  String as a boxed singly-linked 
 list type is therefore a major problem.
 
 -- 
 brandon s allbery  allber...@gmail.com
 wandering unix systems administrator (available) (412) 475-9364 vm/sms
 


Re: String != [Char]

2012-03-23 Thread Edward Kmett
On Fri, Mar 23, 2012 at 4:21 PM, Nate Soares n...@so8r.es wrote:

 Note that this might be a good time to consider re-factoring the list
 operations, for example, making ++ operate on monoids instead of just
 lists.


Note: we have (<>) for Monoid, which was deliberately chosen rather than
generalizing (++), because (++) had already changed meaning once, from applying
to MonadPlus to the more restricted list type, during what I tend to refer to as
the great monomorphization revolution of 1998, and not every MonadPlus is
compatible with the corresponding monoid (Maybe comes to mind).

This also entails moving Data.Monoid into the standard.
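A short sketch of the distinction, using only standard library functions: (<>) generalizes (++) to any Monoid, while Maybe's MonadPlus-style combination disagrees with its Monoid, which is why reusing one name for both would have been wrong.

```haskell
import Control.Applicative ((<|>))

appended :: String
appended = "foo" <> "bar"               -- same result as "foo" ++ "bar"

viaMonoid, viaAlternative :: Maybe [Int]
viaMonoid      = Just [1] <> Just [2]   -- Monoid: combines the contents
viaAlternative = Just [1] <|> Just [2]  -- Alternative/MonadPlus: first success
```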


 I think the 'naming issue' that you mention highlights the need for better
 use of type classes in the prelude.


The major issue with typeclasses for things like special-purpose containers
is that they almost inevitably wind up requiring multiparameter type
classes with functional dependencies, or they need type families. This then
runs afoul of the fact that since neither one is better than the other for
all usecases, neither one seems to be drifting any closer to
standardization.

-Edward