Re: Can strict ST break referential transparency?

2017-11-23 Thread Yuras Shumovich

So my theory is correct, but it is already fixed in 8.2. Nice!

Thank you for the clarification!

Yuras.

On Thu, 23-11-2017 at 10:20 -0500, Ben Gamari wrote:
> Yuras Shumovich <shumovi...@gmail.com> writes:
> 
> > Hello,
> > 
> 
> Hello,
> 
> Sorry for the late reply; this required a bit of reflection. The
> invariants surrounding the suspension of ST computations are a rather
> delicate and poorly documented area.
> 
> I believe the asynchronous exception case which you point out is
> precisely #13615. The solution there was, as David suggests, to ensure
> that no resulting thunk could be entered more than once, by a very
> strict blackholing protocol. Note that this isn't the normal "eager"
> blackholing protocol, which might still allow multiple entry. It's
> rather a stricter variant, requiring two atomic operations.
> 
> I can't be certain that there aren't more cases like this, but I
> suspect not, since most asynchronous suspensions where the resulting
> thunk might "leak" back into the program go through the raiseAsync
> codepath that was fixed in #13615.
> 
> Cheers,
> 
> - ben
> 
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Can strict ST break referential transparency?

2017-11-23 Thread Yuras Shumovich
On Wed, 22-11-2017 at 15:37 -0500, David Feuer wrote:
> If there is indeed a problem, I suspect the right way to fix it is to
> make sure that no partially evaluated thunk is ever resumed twice.
> Inter-thread exceptions are presumably rare enough that we don't have
> to worry *too* much about their cost.

Sounds reasonable. But probably it already works that way? It doesn't
seem to be covered in the papers, though, and it looks like nobody
knows the answer to my question. That makes me worry a bit.

On the other hand, right now resuming a frozen thunk usually crashes or
hangs, as discovered in #14497. Nobody has noticed that in the wild so
far, so the subject is probably just a dark corner case, not worth the
effort.

> 
> 
> David Feuer
> Well-Typed, LLP
> 
> -------- Original message --------
> From: Yuras Shumovich <shumovichy@gmail.com>
> Date: 11/21/17 12:43 PM (GMT-05:00)
> To: ghc-devs <ghc-devs@haskell.org>
> Subject: Can strict ST break referential transparency?
> 
> [...]
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Can strict ST break referential transparency?

2017-11-21 Thread Yuras Shumovich
Sorry, link [2] was meant to be
https://ghc.haskell.org/trac/ghc/ticket/11760

On Tue, 21-11-2017 at 20:43 +0300, Yuras Shumovich wrote:
> [...]
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Can strict ST break referential transparency?

2017-11-21 Thread Yuras Shumovich

Hello,

I was evaluating the possibility that linear types can break
referential transparency [1], exactly like lazy ST [2].

But on the way I realized that even strict ST may suffer from the same
issue. If an ST computation is interrupted by e.g. an async exception,
the runtime will "freeze" it at the point where it was interrupted [3].

So the question: is the "frozen" computation just a normal thunk? Note
that the runtime doesn't guarantee that a thunk will be evaluated only
once [4]. If the frozen thunk captures e.g. an STRef and is evaluated
twice, its effect could become observable from outside, just as in the
case of lazy ST.

I tried to check the theory by stress testing the RTS. Unfortunately I
immediately discovered a runtime crash [5], which is probably not
related to my question.
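
For illustration, a minimal sketch of the kind of stress test I mean
(made up for this email, not the reproducer attached to [5]): a shared
thunk over a strict ST computation is interrupted mid-evaluation by an
async exception and then resumed.

import Control.Concurrent
import Control.Exception
import Control.Monad
import Control.Monad.ST
import Data.STRef

-- a strict ST computation that runs long enough to be interrupted
count :: ST s Int
count = do
  r <- newSTRef (0 :: Int)
  replicateM_ 1000000 (modifySTRef' r (+ 1))
  readSTRef r

main :: IO ()
main = replicateM_ 1000 $ do
  let x = runST count                -- a shared thunk over strict ST
  t <- forkIO (void (evaluate x))    -- start evaluating it ...
  threadDelay 1                      -- ... give it a moment ...
  throwTo t ThreadKilled             -- ... and freeze it mid-flight
  print =<< evaluate x               -- resume the frozen computation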

Hope someone will be able to clarify things for me.

Thanks,
Yuras.

[1] https://github.com/ghc-proposals/ghc-proposals/pull/91#issuecomment-345553071
[2] https://ghc.haskell.org/trac/ghc/ticket/14497
[3] See section 8 there: https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/asynch-exns.pdf
[4] https://www.microsoft.com/en-us/research/wp-content/uploads/2005/09/2005-haskell.pdf
[5] https://ghc.haskell.org/trac/ghc/ticket/14497
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Allow top-level shadowing for imported names?

2016-10-04 Thread Yuras Shumovich
On Tue, 2016-10-04 at 04:48 -0400, Edward Kmett wrote:

> It makes additions of names to libraries far less brittle. You can
> add a new export with a mere minor version bump, and many of the
> situations where that causes breakage can be fixed by this simple
> rule change.

That would be true only if we also allowed imports to shadow each
other. Otherwise there is still a big chance of a name clash.

Can we generalize the proposal such that subsequent imports shadow
preceding ones? In that case you could e.g. list local modules after
library modules, and be sure that new identifiers in libraries will not
clash with local ones. Obviously shadowing should still be a warning.
A sketch of what I have in mind follows below.
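
Something like this (the extension name and the local module are made
up):

{-# LANGUAGE ImportShadowing #-}  -- hypothetical extension name
module Main where

import Data.Monoid  -- library module first; exports (<>)
import MyPrelude    -- hypothetical local module that also exports (<>);
                    -- being listed later, it shadows Data.Monoid.<>
                    -- (with a warning) instead of making every use of
                    -- (<>) ambiguous

main :: IO ()
main = putStrLn ("Hi" <> "There")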

> 
> -Edward
> 
> On Mon, Oct 3, 2016 at 2:12 PM, Iavor Diatchki wrote:
> 
> > 
> > Hi,
> > 
> > Lennart suggested that some time ago; here is the thread from the
> > last time we discussed it:
> > 
> > https://mail.haskell.org/pipermail/haskell-prime/2012-July/003702.html
> > 
> > I think it is a good plan!
> > 
> > -Iavor
> > 
> > 
> > 
> > > On Mon, Oct 3, 2016 at 4:46 AM, Richard Eisenberg wrote:
> > 
> > > 
> > > By all means make the proposal -- I like this idea.
> > > 
> > > > 
> > > > On Oct 3, 2016, at 4:29 AM, Herbert Valerio Riedel wrote:
> > > > 
> > > > 
> > > > Hi *,
> > > > 
> > > > I seem to recall this was already suggested in the past, but I
> > > > can't seem to find it in the archives. For simplicity I'll
> > > > restate the idea:
> > > > 
> > > > 
> > > >     foo :: Int -> Int -> (Int,Int)
> > > >     foo x y = (bar x, bar y)
> > > >       where
> > > >         bar x = x+x
> > > > 
> > > > results merely in a name-shadowing warning (for -Wall):
> > > > 
> > > >     foo.hs:4:9: warning: [-Wname-shadowing]
> > > >         This binding for ‘x’ shadows the existing binding
> > > >           bound at foo.hs:2:5
> > > > 
> > > > 
> > > > However,
> > > > 
> > > >    import Data.Monoid
> > > > 
> > > >    (<>) :: String -> String -> String
> > > >    (<>) = (++)
> > > > 
> > > >    main :: IO ()
> > > >    main = putStrLn ("Hi" <> "There")
> > > > 
> > > > doesn't allow shadowing of (<>), but rather complains about
> > > > ambiguity:
> > > > 
> > > >     bar.hs:7:23: error:
> > > >         Ambiguous occurrence ‘<>’
> > > >         It could refer to either ‘Data.Monoid.<>’,
> > > >           imported from ‘Data.Monoid’ at bar.hs:1:1-18
> > > >           or ‘Main.<>’, defined at bar.hs:4:1
> > > > 
> > > > 
> > > > This is of course in line with the Haskell Report, which says in
> > > > https://www.haskell.org/onlinereport/haskell2010/haskellch5.html#x11-1010005.3
> > > > 
> > > > 
> > > > > 
> > > > > The entities exported by a module may be brought into scope in
> > > > > another module with an import declaration at the beginning of
> > > > > the module. The import declaration names the module to be
> > > > > imported and optionally specifies the entities to be imported.
> > > > > A single module may be imported by more than one import
> > > > > declaration. Imported names serve as top level declarations:
> > > > > they scope over the entire body of the module but may be
> > > > > shadowed by *local non-top-level bindings.*
> > > > 
> > > > 
> > > > However, why don't we allow this to be relaxed via a new
> > > > language extension that allows top-level bindings to shadow
> > > > imported names (and of course emits a warning)?
> > > > 
> > > > Unless I'm missing something, this would help keep existing and
> > > > working code compiling if new versions of libraries start
> > > > exporting new symbols (which happen to clash with local top-level
> > > > defs), rather than resulting in a fatal name clash; and it would
> > > > have no major downsides.
> > > > 
> > > > If this sounds like a good idea, I'll happily promote this into
> > > > a proper proposal over at
> > > > https://github.com/ghc-proposals/ghc-proposals; I mostly wanted
> > > > to get early feedback here (and possibly find out if and where
> > > > this was proposed before), before investing more time turning
> > > > this into a fully fledged GHC proposal.
> > > > 
> > > > Cheers,
> > > >  HVR
> > > > ___
> > > > ghc-devs mailing list
> > > > ghc-devs@haskell.org
> > > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> > > 
> > > ___
> > > ghc-devs mailing list
> > > ghc-devs@haskell.org
> > > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> > > 
> > 
> > 
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> > 
> > 

Re: Proposal process status

2016-07-21 Thread Yuras Shumovich
On Thu, 2016-07-21 at 14:38 -0400, Richard Eisenberg wrote:
> > 
> > On Jul 21, 2016, at 2:25 PM, Yuras Shumovich <shumovi...@gmail.com>
> > wrote:
> > 
> > It is hopeless. Haskell2020 will not include TemplateHaskell, GADTs,
> > etc.
> 
> Why do you say this? I don't think this is a foregone conclusion. I'd
> love to see these standardized.

Because I'm a pessimist :)
We can't even agree to add `text` to the standard library.

> 
> My own 2¢ on these are that we can standardize some subset of
> TemplateHaskell quite easily. GADTs are harder because (to my
> knowledge) no one has ever written a specification of type inference
> for GADTs. (Note that the OutsideIn paper admits to failing at this.)
> Perhaps we can nail it, but perhaps not. Even so, we can perhaps
> standardize much of the behavior around GADTs (but with pattern
> matches requiring lots of type annotations) and say that an
> implementation is free to do better. Maybe we can do even better than
> this, but I doubt we'll totally ignore this issue.
> 
> > The Haskell Prime committee will never catch up if GHC continues
> > adding new extensions.
> 
> Of course not. But I believe some libraries also refrain from using
> new extensions for precisely the same reason -- that the new
> extensions have yet to fully gel.

And you are an optimist. We are lazy, so we'll use whatever is
convenient. There are three ways to force people to refrain from using
new extensions:

- a mature alternative compiler exists, so nobody will use your library
unless it uses only the common subset of features;

- the standard covers all usual needs (I don't think that will be
possible in the near future, and the existence of this email thread
proves it);

- new features are not first-class citizens; e.g. `cabal check` issues
an error (or warning) when you upload a package that uses an immature
extension.

> 
> > In 2020 everybody will use pattern synonyms, overloaded record
> > fields and TypeInType, so the standard will be as far from practice
> > as it is now.
> 
> Pattern synonyms, now with a published paper behind them, may
> actually be in good enough shape to standardize by 2020. I don't know
> anything about overloaded record fields. I'd be shocked if TypeInType
> is ready to standardize by 2020. But hopefully we'll get to it.
> 
> > 
> > The whole idea of language extensions, as it is right now, works
> > against Haskell Prime.
> 
> I heartily disagree here. Ideas that are now standard had to have
> started somewhere, and I really like (in theory) the way GHC/Haskell
> does this.

I'm not completely against language extensions. But using them should
be a real pain, to prevent people from using them everywhere. Ideally
you should have to compile GHC manually to get a particular extension
enabled :)

> 
> The (in theory) parenthetical is because the standardization process
> has been too, well, dead to be useful. Is that changing? Perhaps. I'd
> love to see more action on that front. I'm hoping to take on a more
> active role in the committee after my dissertation is out the door (2
> more weeks!).
> > 
> > I see only one real way to change the situation -- standardize all
> > widely used extensions and declare anything new as experimental
> > unless accepted by the Haskell Prime Committee.
> 
> Agreed here.

Great. So I propose to split section "9. GHC Language Features" of the
user manual into "Stable language extensions" and "Experimental
language extensions", move all the recently added extensions into the
latter, explicitly state in the proposed process that all new
extensions initially go into the "Experimental" subsection, and specify
when they move to the "Stable" one.


> I think that's what we're trying to do. If you have a good
> specification for GADT type inference, that would help us. :)

I'd personally prefer to mark GADTs and TH as experimental. The
difficulty of standardizing them is a sign of immaturity. I regret
every time I have used them in production code.

> 
> Richard
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Proposal process status

2016-07-21 Thread Yuras Shumovich
On Thu, 2016-07-21 at 13:25 -0400, Richard Eisenberg wrote:
> > 
> > On Jul 21, 2016, at 11:29 AM, Yuras Shumovich <shumovi...@gmail.com>
> > wrote:
> > 
> > Unfortunately Haskell *is* an implementation-defined language. You
> > can't compile any nontrivial package from Hackage using Haskell2010
> > GHC.
> 
> Sadly, I agree with this statement. And I think this is what we're
> trying to change.

And I'd like it to be changed too. I'm paid to write software in
Haskell, and I want to have a standard. At the same time I'm a
(probably unusual) Haskell fan, so I want new cool features. Don't you
see a conflict of interest?
https://www.reddit.com/r/haskell/comments/4oyxo2/blog_contributing_to_ghc/d4iaz5t

> 
> > And the same will be true for Haskell2020. We rely on GHC-specific
> > extensions everywhere, directly or indirectly. If the goal of the
> > Haskell Prime is to change that, then the GHC-specific extensions
> > should not be first class citizens in the ecosystem.
> 
> My hope is that Haskell2020 will allow us to differentiate between
> standardized extensions and implementation-defined ones. A key part
> of this hope is that we'll get enough extensions in the first set to
> allow a sizeable portion of our ecosystem to use only standardized
> extensions.

It is hopeless. Haskell2020 will not include TemplateHaskell, GADTs,
etc. The Haskell Prime committee will never catch up if GHC continues
adding new extensions. In 2020 everybody will use pattern synonyms,
overloaded record fields and TypeInType, so the standard will be as far
from practice as it is now.

The whole idea of language extensions, as it is right now, works
against Haskell Prime.
https://www.reddit.com/r/haskell/comments/46jq4i/what_is_the_eventual_fate_of_all_of_these_ghc/d05q9no

I abandoned my CStructures proposal because of that. I don't want to
increase entropy.
https://phabricator.haskell.org/D252

> > 
> > We can continue pretending that Haskell is standard-defined
> > language,
> > but it will not help to change the situation. 
> 
> But writing a new standard that encompasses prevalent usage will help
> to change the situation. And that's the process I'm hoping to
> contribute to.

I see only one real way to change the situation -- standardize all
widely used extensions and declare anything new as experimental unless
accepted by the Haskell Prime Committee. Probably there are other ways,
but we need to clean up the mess ASAP. New extensions only contribute
to the mess -- that is my point.

> 
> Richard
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Proposal process status

2016-07-21 Thread Yuras Shumovich
On Thu, 2016-07-21 at 10:32 -0400, Gershom B wrote:
> On July 21, 2016 at 8:51:15 AM, Yuras Shumovich (shumovi...@gmail.com)
> wrote:
> > 
> > I think that is what the process should change. It makes sense to
> > have two committees only if we have multiple language
> > implementations, but that is not the case. The Prime committee may
> > accept or reject e.g. GADTs, but it will change nothing, because
> > people will continue using GADTs regardless, and any feature
> > accepted by the Prime committee will necessarily be compatible with
> > the GADTs extension.
> 
> I disagree. By the stated goals of the H2020 Committee, if it is
> successful, then by 2020 it will still for the most part have only
> standardized a _portion_ of the extensions that exist today.

Yes, I know. But don't you see how narrow the responsibility of the
H2020 Committee is? The GHC Committee makes all the important
decisions, and H2020 just collects some of GHC's extensions into a set
of "standard" ones. That is useful only when "nonstandard" extensions
are not widely used (e.g. marked as experimental and not recommended
for day-to-day use).

> 
> There’s always been a barrier between implementation and standard in
> the Haskell language; that’s precisely one of the things that has
> _kept_ it from becoming entirely implementation-defined despite the
> prevalence of extensions.

Unfortunately Haskell *is* an implementation-defined language. You
can't compile any nontrivial package from Hackage using Haskell2010
GHC. And the same will be true for Haskell2020. We rely on GHC-specific
extensions everywhere, directly or indirectly. If the goal of the
Haskell Prime is to change that, then the GHC-specific extensions
should not be first-class citizens in the ecosystem. Otherwise there is
no sense in having two committees.

We can continue pretending that Haskell is a standard-defined language,
but that will not help to change the situation.

> 
> Having two entirely different processes here (though obviously not
> without communication between the individuals involved) helps
> maintain that.
> 
> —Gershom
> 
> 
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Proposal process status

2016-07-21 Thread Yuras Shumovich
On Wed, 2016-07-20 at 18:37 +0200, Ben Gamari wrote:
> Yuras Shumovich <shumovi...@gmail.com> writes:
> 
> > Looks like reddit is the wrong place, so I'm replicating my comment
> > here:
> > 
> Thanks for your comments Yuras!
> 
> > >   * Do you feel the proposed process is an improvement over the
> > >     status quo?
> > 
> > Yes, definitely. The existing process is too vague, so formalizing
> > it is a win in any case.
> > 
> Good to hear.
> 
> > >   * What would you like to see changed in the proposed process,
> > >     if anything?
> > 
> > The proposed process overlaps with the Language Committee's powers.
> > In theory the Committee works on the language standard, but de facto
> > Haskell is GHC/Haskell and GHC/Haskell is Haskell. Adding a new
> > extension to GHC adds a new extension to Haskell. So I'd like the
> > process to enforce a separation between experimental extensions (not
> > recommended in production code) and language improvements. I'd like
> > the process to specify how the GHC Committee is going to communicate
> > and share powers with the Language Committee.
> > 
> To clarify, I think Language Committee here refers to the Haskell
> Prime committee, right?

Yes, Herbert used "Haskell Prime 2020 committee" and "Haskell Language
committee" interchangeably in the original announcement:
https://mail.haskell.org/pipermail/haskell-prime/2016-April/004050.html

> 
> I think these two bodies really do serve different purposes.
> Historically the Haskell Prime committee has been quite conservative
> in the sorts of changes that they standardized; as far as I know
> almost all of them come from a compiler. I would imagine that the GHC
> Committee would be a gate-keeper for proposals entering GHC and only
> some time later, when the semantics and utility of the extension are
> well-understood, would the Haskell Prime committee consider
> introducing it to the Report. As far as I understand it, this is
> historically how things have worked in the past, and I don't think
> this new process would change that.

I think that is what the process should change. It makes sense to have
two committees only if we have multiple language implementations, but
that is not the case. The Prime committee may accept or reject e.g.
GADTs, but it will change nothing, because people will continue using
GADTs regardless, and any feature accepted by the Prime committee will
necessarily be compatible with the GADTs extension.

The difference between standard and GHC-specific extensions is just a
question of formal specification, interesting mostly to language
lawyers. (But it is good to have such a formal specification even for
GHC-specific extensions, right?)

Probably it is time to bring -fglasgow-exts back, to separate standard
features from experimental GHC-specific ones.

> 
> Of course, let me know if I'm off-base here.
> 
> Cheers,
> 
> - Ben
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Should we send Travis messages to ghc-builds?

2016-07-21 Thread Yuras Shumovich
On Thu, 2016-07-21 at 00:15 +0200, Ben Gamari wrote:
> Ben Gamari  writes:
> 
> > [ Unknown signature status ]
> > 
> > Hello everyone,
> > 
> > While picking up the pieces from a failed merge today I realized
> > that we currently spend a fair bit of carbon footprint and CPU
> > cycles making Travis test GHC, yet the results of these tests
> > aren't pushed anywhere.
> > 
> > Would anyone object to having Travis push notifications of changes
> > in red/green state to ghc-bui...@haskell.org? Perhaps this will
> > allow some of us to react more quickly to regressions.
> > 
> Actually Thomas points out that we indeed used to do this and yet
> stopped, because it meant that users would fork the repository, enable
> the Travis build on their fork, and then inadvertently spam the list.
> So perhaps we shouldn't do this.

I think this could be controlled by an environment variable set in the
Travis UI ( https://travis-ci.org/ghc/ghc/settings ). A repository fork
will not clone the variables.

Thanks,
Yuras.

> 
> - Ben
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Proposal process status

2016-07-20 Thread Yuras Shumovich

Looks like reddit is the wrong place, so I'm replicating my comment here:

On Wed, 2016-07-20 at 11:36 +0200, Ben Gamari wrote:
> Hello everyone,
> 
> As you hopefully know, a few weeks ago we proposed a new process [1]
> for collecting, discussing, and deciding upon changes to GHC and its
> Haskell superset. While we have been happy to see a small contingent
> of contributors join the discussion, the number is significantly
> smaller than the set who took part in the earlier Reddit discussions.
> 
> In light of this, we are left a bit uncertain of how to proceed. So,
> we would like to ask you to let us know your feelings regarding the
> proposed process:
> 
>   * Do you feel the proposed process is an improvement over the
>     status quo?

Yes, definitely. The existing process is too vague, so formalizing it
is a win in any case.


> 
>   * Why? (this needn't be long, just a sentence hitting the major
>     points)
> 
>   * What would you like to see changed in the proposed process, if
>     anything?


The proposed process overlaps with the Language Committee's powers. In
theory the Committee works on the language standard, but de facto
Haskell is GHC/Haskell and GHC/Haskell is Haskell. Adding a new
extension to GHC adds a new extension to Haskell. So I'd like the
process to enforce a separation between experimental extensions (not
recommended in production code) and language improvements. I'd like the
process to specify how the GHC Committee is going to communicate and
share powers with the Language Committee.

Thanks,
Yuras.

> 
> That's all. Again, feel free to reply either on the GitHub pull
> request [1] or this thread if you would prefer. Your response needn't
> be long; we just want to get a sense of how much of the community
> feels that 1) this effort is worth undertaking, and 2) that the
> proposal before us is in fact an improvement over the current state
> of affairs.
> 
> Thanks for your help!
> 
> Cheers,
> 
> - Ben
> 
> 
> [1] https://github.com/ghc-proposals/ghc-proposals/pull/1
> ___
> Glasgow-haskell-users mailing list
> glasgow-haskell-us...@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Interruptible exception wormholes kill modularity

2016-07-02 Thread Yuras Shumovich
On Sat, 2016-07-02 at 12:29 -0400, Edward Z. Yang wrote:
> Excerpts from Yuras Shumovich's message of 2016-07-02 09:06:59 -0400:
> > On Sat, 2016-07-02 at 00:49 -0400, Edward Z. Yang wrote:
> > > 
> > > P.P.S. I have some speculations about using uninterruptibleMask
> > > more frequently: it seems to me that there ought to be a variant
> > > of uninterruptibleMask that immediately raises an exception if the
> > > "uninterruptible" action blocks. This would probably be of great
> > > assistance in noticing and eliminating blocking in uninterruptible
> > > code.
> > 
> > 
> > Could you please elaborate on where it is useful? Any particular
> > example?
> 
> You would use it in any situation you use an uninterruptibleMask.
> The point is that uninterruptible code is not supposed to take
> too long (the program is unresponsive in the meantime), so it's
> fairly bad news if inside uninterruptible code you block.  The
> block = exception variant would help you find out when this occurred.

Hmm, uninterruptibleMask is used when the code can block, but you don't
want it to throw an (async) exception. waitQSem is an example:
https://hackage.haskell.org/package/base-4.9.0.0/docs/src/Control.Concurrent.QSem.html#waitQSem

Basically, there are cases where code can block, yet you don't want it
to be interrupted. Why do you need uninterruptibleMask when the code
can't block anyway? It is a no-op in that case.
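
To illustrate the distinction (a toy counting semaphore made up for
this email, not the actual QSem code):

import Control.Concurrent.MVar
import Control.Exception

-- takeMVar may block here; under plain mask_ that blocking would be
-- interruptible, while under uninterruptibleMask_ it is not, so the
-- bookkeeping below always completes once we have started
acquire :: MVar Int -> IO ()
acquire sem = uninterruptibleMask_ $ do
  n <- takeMVar sem
  putMVar sem (n - 1)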

> 
> Arguably, it would be more Haskelly if there was a static type
> discipline for distinguishing blocking and non-blocking IO
> operations.
> But some operations are only known to be (non-)blocking at runtime,
> e.g., takeMVar/putMVar, so a dynamic discipline is necessary.

That is correct. In theory it would be useful to encode at the type
level whether IO operations can block, can be interrupted by async
exceptions, or can fail with sync exceptions. Unfortunately it depends
on the runtime, so in practice it is less useful.
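
A purely illustrative sketch of such a static discipline (all names
made up); as said, it breaks down for operations like takeMVar whose
blocking behaviour is only known at runtime:

{-# LANGUAGE DataKinds, KindSignatures #-}

data Blocking = MayBlock | NonBlocking

-- an IO action tagged with whether it may block
newtype TaggedIO (b :: Blocking) a = TaggedIO { runTaggedIO :: IO a }

-- sequencing pessimistically keeps the MayBlock tag; a real design
-- would compute the join of the two tags at the type level
andThen :: TaggedIO b1 a -> (a -> TaggedIO b2 c) -> TaggedIO 'MayBlock c
andThen (TaggedIO m) k = TaggedIO (m >>= runTaggedIO . k)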

> 
> > I'm interested because a few years ago I proposed a similar
> > function, but in a bit different context. I needed it to make
> > interruptible cleanup actions safe to use.
> 
> Could you elaborate more / post a link?

Sorry, I thought I had added the link. I'm talking about this:
http://blog.haskell-exists.com/yuras/posts/handling-async-exceptions-in-haskell-pushing-bracket-to-the-limits.html#example-6-pushing-to-the-limits

The idea is to disable external async exceptions, but interrupt any
interruptible operation on the way. The article describes the reason I
need it.

Thanks,
Yuras.

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Interruptible exception wormholes kill modularity

2016-07-02 Thread Yuras Shumovich
On Sat, 2016-07-02 at 00:49 -0400, Edward Z. Yang wrote:
> 
> P.P.S. I have some speculations about using uninterruptibleMask more
> frequently: it seems to me that there ought to be a variant of
> uninterruptibleMask that immediately raises an exception if the
> "uninterruptible" action blocks. This would probably be of great
> assistance in noticing and eliminating blocking in uninterruptible
> code.


Could you please elaborate on where it is useful? Any particular example?

I'm interested because a few years ago I proposed a similar function,
but in a bit different context. I needed it to make interruptible
cleanup actions safe to use.

Thanks,
Yuras.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-15 Thread Yuras Shumovich
Ah, and I offer my help in case development effort is the main concern.
Though I'm not familiar with this part of GHC, so I'd need a mentor. My
help probably won't be very useful, but I'd be happy to participate.

On Mon, 2016-02-15 at 15:21 +0300, Yuras Shumovich wrote:
> [...]
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-15 Thread Yuras Shumovich
On Mon, 2016-02-15 at 12:35 +0100, Herbert Valerio Riedel wrote:
> On 2016-02-15 at 12:00:23 +0100, Yuras Shumovich wrote:
> 
> [...]
> 
> > > - It is possible to have unlifted types about even without
> > > -XMagicHash. -XMagicHash is simply a lexer extension, nothing
> > > more. By convention, we use the # suffix with unlifted things,
> > > but there's no requirement here. Having -XMagicHash thus imply a
> > > flag about the type system is bizarre.
> > 
> > OK, I always forget about that. But isn't it a bug already? Usually
> > we don't allow code that uses GHC-specific extensions to compile
> > without a language pragma. Why don't we have such a pragma for
> > levity polymorphism?
> 
> There are extensions which are only needed at the definition
> site. Take {-# LANGUAGE PolyKinds #-} for instance; this is enabled
> inside the Data.Proxy module, which defines the following type
> 
>   {-# LANGUAGE PolyKinds #-}
>   module Data.Proxy where
> 
>   data Proxy t = Proxy
> 
> Now when you query via GHCi 7.10, you get the following output
> 
>   > import Data.Proxy
>   > :i Proxy 
>   type role Proxy phantom
>   data Proxy (t :: k) = Proxy
>   -- Defined in ‘Data.Proxy’
>   instance forall (k :: BOX) (s :: k). Bounded (Proxy s) -- Defined
> in ‘Data.Proxy’
>   instance forall (k :: BOX) (s :: k). Enum (Proxy s) -- Defined in
> ‘Data.Proxy’
>   instance forall (k :: BOX) (s :: k). Eq (Proxy s) -- Defined in
> ‘Data.Proxy’
>   instance Monad Proxy -- Defined in ‘Data.Proxy’
>   instance Functor Proxy -- Defined in ‘Data.Proxy’
>   instance forall (k :: BOX) (s :: k). Ord (Proxy s) -- Defined in
> ‘Data.Proxy’
>   instance forall (k :: BOX) (s :: k). Read (Proxy s) -- Defined in
> ‘Data.Proxy’
>   instance forall (k :: BOX) (s :: k). Show (Proxy s) -- Defined in
> ‘Data.Proxy’
>   instance Applicative Proxy -- Defined in ‘Data.Proxy’
>   instance Foldable Proxy -- Defined in ‘Data.Foldable’
>   instance Traversable Proxy -- Defined in ‘Data.Traversable’
>   instance forall (k :: BOX) (s :: k). Monoid (Proxy s) -- Defined in
> ‘Data.Proxy’
> 
> even though you never enabled any extensions beyond what Haskell2010
> provides.
> 
> Do you consider this a bug as well?

Yes, IMO it is a bug. Though people didn't complain so far, so let's
say it is a minor design flaw. Probably there are more important bugs
to fix.

Ideally language extensions should not leak into Haskell2010. E.g.
making lenses using TemplateHaskell doesn't leak to the use site,
because I can define the lenses by hand and preserve the API. But if
something can't be expressed in Haskell2010, then it should require the
extension to be enabled on both the definition and use sides.
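
For example (a sketch assuming the lens package; the hand-written
version has the same shape as what makeLenses would generate):

{-# LANGUAGE TemplateHaskell #-}
module Point where

import Control.Lens (makeLenses)

data Point = Point { _x :: Int, _y :: Int }

-- TemplateHaskell is needed at the definition site only ...
makeLenses ''Point  -- generates lenses x and y

-- ... and an equivalent lens written by hand, expressible in plain
-- Haskell2010; callers cannot tell the two apart
yLens :: Functor f => (Int -> f Int) -> Point -> f Point
yLens f (Point a b) = fmap (Point a) (f b)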

In the case of ($) people complain, and everybody seems to agree that
levity polymorphism leaking into Haskell2010 is bad. Fixing the leakage
IMO is the right way, while hiding the issue behind -fshow-runtime-rep
is a hack and a lie.

Probably the right way is harder in terms of development effort (I have
no idea). In that case it probably makes sense to choose the easier way
and introduce a hack. Life consists of compromises.

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-15 Thread Yuras Shumovich
On Mon, 2016-02-15 at 14:00 +0300, Yuras Shumovich wrote:
> Thank you for the reply!
> 
> On Sun, 2016-02-14 at 22:58 -0500, Richard Eisenberg wrote:
> > This approach wouldn't quite work.
> > 
> > - It seems to kick in only when instantiating a Levity variable.
> > That would not happen when using :info.
> 
> Obviously Levity variables should be defaulted to Lifted everywhere,
> including :info and :type. Is that possible, or are there some
> technical limitations?
> 
> > 
> > - It is possible to have unlifted types about even without
> > -XMagicHash. -XMagicHash is simply a lexer extension, nothing more.
> > By convention, we use the # suffix with unlifted things, but
> > there's
> > no requirement here. Having -XMagicHash thus imply a flag about the
> > type system is bizarre.
> 
> OK, I always forget about that. But isn't it a bug already? Usually
> we don't allow code that uses GHC-specific extensions to compile
> without a language pragma. Why don't we have such a pragma for levity
> polymorphism? If we agree that we need a pragma, then we can find a
> way to introduce it without massive code breakage, e.g. we can add a
> warning to -Wcompat and make the pragma mandatory in 3 releases. Then
> we can have -fshow-runtime-rep as a temporary solution.

Correction: we don't need -fshow-runtime-rep even temporarily; the
-XLevityPolymorphism flag in GHCi will be sufficient.

> 
> [...]

Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-15 Thread Yuras Shumovich
Thank you for the reply!

On Sun, 2016-02-14 at 22:58 -0500, Richard Eisenberg wrote:
> This approach wouldn't quite work.
> 
> - It seems to kick in only when instantiating a Levity variable. That
> would not happen when using :info.

Obviously Levity variables should be defaulted to Lifted everywhere,
including :info and :type. Is that possible, or are there some
technical limitations?

> 
> - It is possible to have unlifted types about even without
> -XMagicHash. -XMagicHash is simply a lexer extension, nothing more.
> By convention, we use the # suffix with unlifted things, but there's
> no requirement here. Having -XMagicHash thus imply a flag about the
> type system is bizarre.

OK, I always forget about that. But isn't it a bug already? Usually we
don't allow code that uses GHC-specific extensions to compile without a
language pragma. Why don't we have such a pragma for levity
polymorphism? If we agree that we need a pragma, then we can find a way
to introduce it without massive code breakage, e.g. we can add a
warning to -Wcompat and make the pragma mandatory in 3 releases. Then
we can have -fshow-runtime-rep as a temporary solution.

It naturally solves an issue with Haddock -- it should show the
levity-polymorphic type when an identifier is exported from a module
with the pragma, and the monomorphic type otherwise. Basically that is
what Haddock does for KindSignatures.

> 
> Furthermore, I'm not sure what the practical, user-visible
> improvement would be over the approach you include and the -fshow-
> runtime-rep idea.

The improvement is to keep ($)'s type as specified by the Haskell2010
report (including :info) and never lie about it, but allow levity
polymorphism when explicitly requested by the user.

> 
> Richard
> 
> 
> 
> On Feb 13, 2016, at 11:40 AM, Yuras Shumovich <shumovi...@gmail.com>
> wrote:
> > [...]
> 
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: New type of ($) operator in GHC 8.0 is problematic

2016-02-13 Thread Yuras Shumovich
On Sat, 2016-02-13 at 13:41 +0100, Ben Gamari wrote:
> Ryan Scott  writes:
> 
> > Hi Chris,
> > 
> > The change to ($)'s type is indeed intentional. The short answer is
> > that ($)'s type prior to GHC 8.0 was lying a little bit. If you
> > defined something like this:
> > 
> > unwrapInt :: Int -> Int#
> > unwrapInt (I# i) = i
> > 
> ...
> 
> Hello everyone,
> 
> While this thread continues to smolder, it seems that the arguments
> relevant to the levity polymorphism change have been sussed out. Now
> seems like a good time to review what we have all learned,
> 
>  * In 7.10 and earlier the type of ($) is a bit of a lie as it did not
>    reflect the fact that the result type was open-kinded.
> 
>    ($) also has magic to allow impredicative uses, although this is
>    orthogonal to the present levity discussion.
> 
>  * the type of ($) has changed to become more truthful in 8.0: we now
>    capture lifted-ness in the type system with the notion of Levity.
> 
>  * there is widespread belief that the new type is too noisy and
>    obfuscates the rather simple concept embodied by ($). This is
>    especially concerning for those teaching and learning the language.
> 
>  * One approach to fix this would be to specialize ($) for lifted
>    types and introduce a new levity polymorphic variant. This carries
>    the potential to break existing users of ($), although it's unclear
>    how much code this would affect in practice.
> 
>  * Another approach would be to preserve the current lie with
>    pretty-printer behavior. This would be relatively easy to do and
>    would allow us to avoid breaking existing users of ($). This,
>    however, comes at the expense of some potential confusion when
>    polymorphism is needed.

Thank you for the summary! The thread is too big to find anything in
it.

I'd like to present a slightly different approach, kind of a
compromise, without lies or code breakage: introduce a language pragma
for levity polymorphism and default levity-polymorphic signatures to
"*" when the pragma is not enabled.

For example, ($) could be defined like it is right now:

($)
  :: forall (w :: GHC.Types.Levity) a (b :: TYPE w).
     (a -> b) -> a -> b

But when it is used in a module without levity polymorphism enabled,
"w" is defaulted to "Lifted", "b" gets kind "*", and ($) gets its old
type:

($)
  :: (a -> b) -> a -> b

So any use of ($) with types of kind "#" is disallowed.

But with levity polymorphism enabled, one will see the full type and
can use ($) with unlifted types. To prevent breakage of existing code,
the MagicHash extension should by default imply levity polymorphism.
A sketch of what this would permit follows below.
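
Concretely (this already typechecks with the levity-polymorphic ($) of
GHC 8.0; MagicHash here stands in for the implied pragma):

{-# LANGUAGE MagicHash #-}
module Demo where

import GHC.Exts (Int (..), Int#)

unwrapInt :: Int -> Int#
unwrapInt (I# i) = i

-- the result of ($) here has kind #; with the plain Haskell2010 type
-- (a -> b) -> a -> b this application would be rejected
addOne :: Int -> Int
addOne n = I# (unwrapInt $ n) + 1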

What do you think? Am I missing something?

Thanks,
Yuras.

>  * There are further questions regarding the appropriate kinds
>    of (->) and (.) [1]
> 
>  * Incidentally, there is a GHC or Haddock bug [2] which causes kind
>    signatures to be unnecessarily shown in documentation for some
>    types, exposing levities to the user.
> 
> The current plan to address this situation is as follows,
> 
>  * Introduce [3] a flag, -fshow-runtime-rep, which when disabled will
>    cause the pretty-printer to instantiate levity-polymorphic types
>    as lifted (e.g. resulting in *). This flag will be off by default,
>    meaning that users will in most cases see the usual lifted types
>    unless they explicitly request otherwise.
> 
>  * Fix the GHC/Haddock bug, restoring elision of unnecessary kind
>    signatures in documentation.
> 
>  * In the future we should seriously consider introducing an
>    alternate Prelude for beginners
>  
> As far as I can tell from the discussion, this was an acceptable
> solution to all involved. If there are any remaining objections or
> concerns let's discuss them in another thread.
> 
> Thanks to everyone who contributed to this effort.
> 
> Cheers,
> 
> - Ben
> 
> 
> [1] https://ghc.haskell.org/trac/ghc/ticket/10343#comment:27
> [2] https://ghc.haskell.org/trac/ghc/ticket/11567
> [3] https://ghc.haskell.org/trac/ghc/ticket/11549
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Typechecker tests failures

2014-12-07 Thread Yuras Shumovich

Simon,

I tracked the T7891 and tc124 failures down to the simplifier, namely
`simplExprF1` for `Case`. Core Lint catches the bug (requires at least
-O1), but without -dcore-lint the compiler hangs in a busy loop. I made
it work with a simple patch:

 diff --git a/compiler/simplCore/Simplify.hs b/compiler/simplCore/Simplify.hs
 index 7611f56..d396b60 100644
 --- a/compiler/simplCore/Simplify.hs
 +++ b/compiler/simplCore/Simplify.hs
 @@ -950,8 +950,10 @@ simplExprF1 env expr@(Lam {}) cont
          zap b | isTyVar b = b
                | otherwise = zapLamIdInfo b
 
 -simplExprF1 env (Case scrut bndr _ alts) cont
 -  = simplExprF env scrut (Select NoDup bndr alts env cont)
 +simplExprF1 env (Case scrut bndr alts_ty alts) cont
 +  = do { expr <- simplExprC env scrut (Select NoDup bndr alts env
 +                                              (mkBoringStop alts_ty))
 +       ; rebuild env expr cont }
 
  simplExprF1 env (Let (Rec pairs) body) cont
    = do  { env' <- simplRecBndrs env (map fst pairs)

(I have no idea what most of this code does, but I learned a lot while
investigating this issue :) )

The relevant commit is:

 commit a0b2897ee406e24a05c41768a0fc2395442dfa06
 Author: Simon Peyton Jones <simo...@microsoft.com>
 Date:   Tue May 27 09:09:28 2014 +0100
 
 Simple refactor of the case-of-case transform
 
 More modular, less code.  No change in behaviour.

T7861 failed because of an additional lambda abstraction in the Core.
Not sure whether it is important.

Thanks,
Yuras

On Sat, 2014-12-06 at 19:04 +0300, Yuras Shumovich wrote:
> [...]


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Typechecker tests failures

2014-12-06 Thread Yuras Shumovich

Hello,

I was working on #9605, and found a number of failed tests:

Unexpected failures:
   should_compile  T7891 [exit code non-0] (hpc,optasm,optllvm)
   should_compile  tc124 [exit code non-0] (hpc,optasm,optllvm)
   should_run  T7861 [bad exit code]
(normal,hpc,optasm,ghci,threaded1,threaded2,dyn,optllvm)

They seem to be unrelated to my work, and they are skipped when "fast"
is enabled.

T7891 and tc124 fail with Core Lint errors.

Looks like Phabricator uses the "fast" way to validate revisions, so
the failures were not noticed.

Thanks,
Yuras


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: mask in waitQSem

2014-11-25 Thread Yuras Shumovich
On Tue, 2014-11-25 at 13:53 +0000, Simon Marlow wrote:
> On 24/11/2014 11:12, Yuras Shumovich wrote:
> > On Mon, 2014-11-24 at 10:09 +0000, Simon Marlow wrote:
> > > On 14/11/2014 21:23, Yuras Shumovich wrote:
> > > >
> > > > I was reviewing some exception handling code in the base
> > > > library, and I found this line:
> > > > https://phabricator.haskell.org/diffusion/GHC/browse/master/libraries/base/Control/Concurrent/QSem.hs;165072b334ebb2ccbef38a963ac4d126f1e08c96$74
> > > >
> > > > Here mask is used, but it looks completely useless to me.
> > > > waitQSem itself should be called with async exceptions masked,
> > > > otherwise there is no way to prevent a resource leak.
> > > >
> > > > Does anyone know why mask is used here?
> > >
> > > I wrote that code.  It looks like mask_ is important, otherwise an
> > > async exception might leave the MVar empty, no?  We can't assume
> > > that waitQSem is always called inside a mask_, so it does its own
> > > mask_.
> >
> > Hmm, yes, you are right.
> >
> > Note that in case of failure (and without an external `mask`) it is
> > not possible to determine whether a unit of resource was reserved
> > or not. That makes the particular instance of `QSem` unusable, and
> > we have to create another one. Probably there are situations where
> > that is OK though.
> 
> I don't think that's true: if waitQSem fails, then the semaphore has
> not been acquired.  This guarantee is important for bracket waitQSem
> signalQSem to work.  I just peered at the code again and I believe
> this is true - have you spotted a case where it might be false?  If
> so that's a bug.

The note was about the case when waitQSem is *not* called inside mask
(as you said, we can't assume that). In that case it can be interrupted
by an async exception either before entering mask_ or after leaving it.
Is that true? That makes waitQSem not very useful outside of bracket.

When used inside bracket, it seems to be safe; a sketch of that pattern
follows below.
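
For concreteness, using the real Control.Concurrent.QSem API:

import Control.Concurrent.QSem
import Control.Exception (bracket_)

-- waitQSem and signalQSem paired by bracket_, so the unit is released
-- even if the body dies with a (possibly asynchronous) exception
withQSem :: QSem -> IO a -> IO a
withQSem sem = bracket_ (waitQSem sem) (signalQSem sem)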

 
> Cheers,
> Simon
> 
> > Thanks,
> > Yuras
> > 
> > > Cheers,
> > > Simon
> > > 
> > > > I wonder whether the author of the code tried to do something
> > > > different, so there can actually be a bug hidden here. Probably
> > > > uninterruptibleMask_ should be used here? (I don't think so
> > > > though.)
> > > > 
> > > > Thanks,
> > > > Yuras
> > > > 
> > > > ___
> > > > ghc-devs mailing list
> > > > ghc-devs@haskell.org
> > > > http://www.haskell.org/mailman/listinfo/ghc-devs


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: mask in waitQSem

2014-11-24 Thread Yuras Shumovich
On Mon, 2014-11-24 at 10:09 +, Simon Marlow wrote:
 On 14/11/2014 21:23, Yuras Shumovich wrote:
 
  I was reviewing some exception handling code in base library, and I
  found this line:
  https://phabricator.haskell.org/diffusion/GHC/browse/master/libraries/base/Control/Concurrent/QSem.hs;165072b334ebb2ccbef38a963ac4d126f1e08c96$74
 
  Here mask is used, but it looks completely useless to me. waitQSem
  itself should be called with async exceptions masked, otherwise there is
  no way to prevent a resource leak.
 
  Does anyone know why mask is used here?
 
 I wrote that code.  It looks like mask_ is important, otherwise an async 
 exception might leave the MVar empty, no?  We can't assume that waitQSem 
 is always called inside a mask_, so it does its own mask_.

Hmm, yes, you are right.

Note that in case of failure (and without an external `mask`) it is not
possible to determine whether a unit of resource was reserved or not.
That makes the particular instance of `QSem` unusable, and we have
to create another one. Probably there are situations where that is OK
though.

Thanks,
Yuras

 
 Cheers,
 Simon
 
 
  I wonder whether the author of the code tried to do something different,
  so there could actually be a bug hidden here. Perhaps
  uninterruptibleMask_ should be used here? (I don't think so though.)
 
  Thanks,
  Yuras
 
 
  ___
  ghc-devs mailing list
  ghc-devs@haskell.org
  http://www.haskell.org/mailman/listinfo/ghc-devs
 


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


mask in waitQSem

2014-11-14 Thread Yuras Shumovich

Hello,

I was reviewing some exception handling code in base library, and I
found this line:
https://phabricator.haskell.org/diffusion/GHC/browse/master/libraries/base/Control/Concurrent/QSem.hs;165072b334ebb2ccbef38a963ac4d126f1e08c96$74

Here mask is used, but it looks completely useless to me. waitQSem
itself should be called with async exceptions masked, otherwise there is
no way to prevent a resource leak.

Does anyone know why mask is used here?

I wonder whether the author of the code tried to do something different,
so there could actually be a bug hidden here. Perhaps
uninterruptibleMask_ should be used here? (I don't think so though.)

Thanks,
Yuras


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: mask in waitQSem

2014-11-14 Thread Yuras Shumovich
On Fri, 2014-11-14 at 16:30 -0500, Brandon Allbery wrote:
 On Fri, Nov 14, 2014 at 4:23 PM, Yuras Shumovich shumovi...@gmail.com
 wrote:
 
  Here mask is used, but it looks completely useless to me. waitQSem
  itself should be called with async exceptions masked, otherwise there is
  no way to prevent a resource leak.
 
  Does anyone know why mask is used here?
 
 
 I thought QSem was known to be completely unsafe and
 http://hackage.haskell.org/package/SafeSemaphore was recommended instead,
 with QSem and friends slated for removal?
 
 In any case, there are probably leftover attempts to make it safe, yes.
 

Oh, that explains it. Thank you a lot.

Yuras



___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: `arc` changes my commit messages

2014-10-21 Thread Yuras Shumovich
On Tue, 2014-10-21 at 09:34 -0400, Richard Eisenberg wrote:
 Hi all,
 
 Is there a way to put `arc` into a read-only mode?

Not sure it is relevant, please ignore me if it is not.

Does arc diff --preview work for you? It will create a diff without
creating a revision or changing anything locally. Then you can attach the
diff to an existing revision or create a new one.
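
For example:

$ arc diff --preview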

 
 Frequently while working on a patch, I make several commits, preferring to 
 separate out testing commits from productive work commits and non-productive 
 (whitespace, comments) commits. Sometimes each of these categories are 
 themselves broken into several commits. These commits are *not* my internal 
 workflow. They are intentionally curated by rebasing as I'm ready to publish 
 the patch, as I think the patches are easy to read this way. (Tell me if I'm 
 wrong, here!) I've resolved myself not to use `arc land`, but instead to 
 apply the patch using git.
 
 Yet, when I call `arc diff`, even if I haven't amended my patch during the 
 `arc diff`ing process, the commit message of the tip of my branch is changed, 
 and without telling me. I recently pushed my (tiny, uninteresting) fix to 
 #9692. Luckily, my last commit happened to be the meat, so the amended commit 
 message is still wholly relevant. But, that won't always be the case, and I 
 was surprised to see a Phab-ified commit message appear in the Trac ticket 
 after pushing.
 
 I know I could use more git-ery to restore my old commit message. But is 
 there a way to stop `arc` from doing the message change in the first place?
 
 Thanks!
 Richard
 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Wiki: special namespace for proposals?

2014-10-14 Thread Yuras Shumovich

Hello,

Would it be better to organize proposals under one namespace? Right now
they belong to the root namespace, so the title index
( https://ghc.haskell.org/trac/ghc/wiki/TitleIndex ) is hard to use.

I was going to start a new page describing a language extension, but I
don't want to increase the entropy even more. What about creating a
special namespace, e.g. Proposals? It probably makes sense to divide it
further?

Thanks,
Yuras

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: heres a 32bit OS X 7.8.3 build

2014-10-09 Thread Yuras Shumovich
Hello Carter,

I tried to install it, but got the error below.
I did the usual thing I do on linux: ./configure --prefix=... && sudo make
install

The tail of the log:

Installing library in /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/terminfo-0.4.0.0

utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist copy
libraries/haskeline dist-install strip '' '/opt/ghc-7.8.3_x86'
'/opt/ghc-7.8.3_x86/lib/ghc-7.8.3'
'/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' 'v p dyn'

Installing library in /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/haskeline-0.7.1.2

utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist copy compiler
stage2 strip '' '/opt/ghc-7.8.3_x86' '/opt/ghc-7.8.3_x86/lib/ghc-7.8.3'
'/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' 'v p dyn'

Installing library in /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/ghc-7.8.3

utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist copy
libraries/old-time dist-install strip '' '/opt/ghc-7.8.3_x86'
'/opt/ghc-7.8.3_x86/lib/ghc-7.8.3'
'/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' 'v p dyn'

Installing library in /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/old-time-1.1.0.2

utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist copy
libraries/haskell98 dist-install strip '' '/opt/ghc-7.8.3_x86'
'/opt/ghc-7.8.3_x86/lib/ghc-7.8.3'
'/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' 'v p dyn'

Installing library in /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/haskell98-2.0.0.3

utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist copy
libraries/haskell2010 dist-install strip '' '/opt/ghc-7.8.3_x86'
'/opt/ghc-7.8.3_x86/lib/ghc-7.8.3'
'/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' 'v p dyn'

Installing library in /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/haskell2010-1.1.2.0

/opt/ghc-7.8.3_x86/lib/ghc-7.8.3/bin/ghc-pkg --force --global-package-db
/opt/ghc-7.8.3_x86/lib/ghc-7.8.3/package.conf.d update
rts/dist/package.conf.install

Reading package info from rts/dist/package.conf.install ... done.

utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist register
libraries/ghc-prim dist-install /opt/ghc-7.8.3_x86/lib/ghc-7.8.3/bin/ghc
/opt/ghc-7.8.3_x86/lib/ghc-7.8.3/bin/ghc-pkg
/opt/ghc-7.8.3_x86/lib/ghc-7.8.3 '' '/opt/ghc-7.8.3_x86'
'/opt/ghc-7.8.3_x86/lib/ghc-7.8.3'
'/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' NO

Registering ghc-prim-0.3.1.0...

utils/ghc-cabal/dist-install/build/tmp/ghc-cabal-bindist register
libraries/integer-gmp dist-install
/opt/ghc-7.8.3_x86/lib/ghc-7.8.3/bin/ghc
/opt/ghc-7.8.3_x86/lib/ghc-7.8.3/bin/ghc-pkg
/opt/ghc-7.8.3_x86/lib/ghc-7.8.3 '' '/opt/ghc-7.8.3_x86'
'/opt/ghc-7.8.3_x86/lib/ghc-7.8.3'
'/opt/ghc-7.8.3_x86/share/doc/ghc/html/libraries' NO

Registering integer-gmp-0.5.1.0...

ghc-cabal: integer-gmp-0.5.1.0: library-dirs: yes is a relative path which

makes no sense (as there is nothing for it to be relative to). You can make

paths relative to the package database itself by using ${pkgroot}. (use

--force to override)

make[1]: *** [install_packages] Error 1

make: *** [install] Error 2

Thanks,
Yuras

2014-10-08 22:39 GMT+03:00 Carter Schonwald carter.schonw...@gmail.com:

 hey all,

 I know all of you wish you could run 32bit ghc 7.8.3 on your snazzy mac OS
 10.9, so here you are!


 http://www.wellposed.com.s3.amazonaws.com/opensource/ghc/releasebuild-unofficial/ghc-7.8.3-i386-apple-darwin.tar.bz2


 $ shasum -a256 ghc-7.8.3-i386-apple-darwin.tar.bz2
 1268ce020b46b0b459b8713916466cb92ce0c54992a76b265db203e9ef5fb5e5
  ghc-7.8.3-i386-apple-darwin.tar.bz2

 is the relevant SHA 256 digest

 NB: I believe I managed to build it with intree-gmp too! So it won't need
 GMP installed in the system (but I could be wrong, in which case brew
 install gmp will suffice)

 cheers
 -Carter

 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Tentative high-level plans for 7.10.1

2014-10-07 Thread Yuras Shumovich
Hello,

Note: you actually don't have to backport anything. Leave it for people
who are interested in an LTS release.

As a haskell enthusiast, I like all the features GHC comes with each
release. But as a working haskell programmer I'm tired. All the code I
wrote at work would probably still work with ghc-6.8, but I have to
switch to a newer ghc twice a year. (The last time it was because of a
gcc/clang issue on mac os.)

An LTS release means you MAY backport fixes: if you want or have time, if
there are people interested in that, etc.
Probably we'll have a better chance that hackage libraries will support LTS
releases longer than they support regular releases now. As a result it
will be easier to introduce breaking changes like AMP or the
Traversable/Foldable proposal.

Thanks,
Yuras

On Mon, 2014-10-06 at 19:45 -0500, Austin Seipp wrote:
 The steps for making a GHC release are here:
 https://ghc.haskell.org/trac/ghc/wiki/MakingReleases
 
 So, for the record, making a release is not *that* arduous, but it
 does take time. On average it will take me about 1 day or so to go
 from absolutely-nothing to release announcement:
 
  1. Bump version, update configure.ac, tag.
  2. Build source tarball (this requires 1 build, but can be done very 
 quickly).
  3. Make N binary builds for each platform (the most time consuming
 part, as this requires heavy optimizations in the builds).
  4. Upload documentation for all libraries.
  5. Update webpage and upload binaries.
  6. Send announcement.
  7. Upload binaries from other systems later.
 
 Herbert has graciously begun taking care of stewarding and uploading
 the libraries. So, there are a few steps we could introduce to
 alleviate this process technically in a few ways, but ultimately all
 of these have to happen, pretty much (regardless of the automation
 involved).
 
 But I don't think this is the real problem.
 
 The real problem is that GHC moves forward in terms of implementation
 extremely, extremely quickly. It is not clear how to reconcile this
 development pace with something like needing dozens of LTS releases
 for a stable version. At least, not without a lot of concentrated
 effort from almost every single developer. A lot of it can be
 alleviated through social process perhaps, but it's not strictly
 technical IMO.
 
 What do I mean by that? I mean that:
 
  - We may introduce a feature in GHC version X.Y
  - That might have a bug, or other problems.
  - We may fix it, and in the process, fix up a few other things and
 refactor HEAD, which will be GHC X.Y+2 eventually.
  - Repeat steps 2-3 a few times.
  - Now we want to backport the fixes for that feature in HEAD back to X.Y.
  - But GHC X.Y has *significantly* diverged from HEAD in that
 timeframe, because of step 3 being repeated!
 
 In other words: we are often so aggressive at refactoring code that
 the *act* of backporting in and of itself can be complicated, and it
 gets harder as time goes on - because often the GHC of a year ago is
 so much different than the GHC of today.
 
 As a concrete example of this, let's look at the changes between GHC
 7.8.2 and GHC 7.8.3:
 
 https://github.com/ghc/ghc/compare/ghc-7.8.2-release...ghc-7.8.3-release
 
 There are about ~110 commits between 7.8.2 and 7.8.3. But as the 7.8
 branch lived on, backporting fixes became significantly more complex.
 In fact, I estimate close to 30 of those commits were NOT direct 7.8
 requirements - but they were brought in because _actual fixes_ were
 dependent on them, in non-trivial ways.
 
 Take for example f895f33 by Simon PJ, which fixes #9023. The problem
 with f895f33 is that by the time we fixed the bug in HEAD with that
 commit, the history had changed significantly from the branch. In
 order to get f895f33 to plant easily, I had to backport *at least* 12
 to 15 other commits, which it was dependent upon, and commits those
 commits were dependent upon, etc etc. I did not see any non-trivial
 way to do this otherwise.
 
 I believe at one point Gergo backported some of his fixes to 7.8,
 which had since become 'non applicable' (and I thank him for that
 greatly), but inevitably we instead brought along the few extra
 changes anyway, since they were *still* needed for other fixes. And
 some of them had API changes. So the choice was to rewrite 4 patches
 for an old codebase completely (the work being done by two separate
 people) or backport a few extra patches.
 
 The above is obviously an extreme case. But it stands to reason this
 would _only happen again_ with 7.8.4, probably even worse since more
 months of development have gone by.
 
 An LTS release would mandate things like no-API-changes-at-all, but
 this significantly limits our ability to *actually* backport patches
 sometimes, like the above, due to dependent changes. The alternative,
 obviously, is to do what Gergo did and manually re-write such a fix
 for the older branch. But that means we would have had to do that for
 *every patch* in the same boat, including 2 or 3 other fixes we
 

Re: FFI: c/c++ struct on stack as an argument or return value

2014-10-07 Thread Yuras Shumovich

Simon,

I finally managed to implement that for major NCG backends.
Phabricator revision is here: https://phabricator.haskell.org/D252

Here is a link to the review you did before:
https://github.com/Yuras/ghc/commit/7295a4c600bc69129b6800be5b52c3842c9c4e5b

I don't have an implementation for mac os x86, ppc and sparc. Are they
actively used today? I don't have access to hardware to test them on.

Do you think it has its own value without being exposed to the haskell
FFI? What is the minimal feature set I should implement to get it merged?
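
For reference, the Haskell binding for the cmm_test example quoted below
would look roughly like this (a sketch; I'm assuming prim imports accept
unboxed-tuple results, including Float#, here):

{-# LANGUAGE GHCForeignImportPrim #-}
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE UnboxedTuples #-}
{-# LANGUAGE UnliftedFFITypes #-}

foreign import prim "cmm_test" test :: Int# -> (# Int#, Int#, Int#, Float# #)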

Thanks,
Yuras

On Sat, 2014-06-14 at 18:57 +0300, Yuras Shumovich wrote:
 Hello,
 
 I implemented support for returning C structures by value in cmm for
 x86_64 (most likely it works only on linux). You can find it here:
 https://github.com/Yuras/ghc/commits/cmm-cstruct
 
 It supports the I8, I16, I32, I64, F_ and D_ cmm types, and requires a
 special annotation. For example:
 
 #include "Cmm.h"
 
 #define MyStruct struct(CInt, I8, struct(I8, CInt))
 
 cmm_test(W_ i)
 {
   CInt i1;
   I8 i2, i3;
   float32 i4;
   (i1, i2, i3, i4) = ccall c_test(W_TO_INT(i)) MyStruct;
   return (TO_W_(i1), TO_W_(i2), TO_W_(i3), i4);
 }
 
 (See test directory for full examples.)
 
 
 Do you think it is the right approach?
 Could anyone review the code, please?
 
 And the last thing: I need a mentor for this project. Is anyone interested?
 
 Thanks,
 Yuras
 
 On Tue, 2014-03-18 at 21:30 +, Simon Marlow wrote:
  So the hard parts are:
  
- the native code generators
- native adjustor support (rts/Adjustor.c)
  
  Everything else is relatively straightforward: we use libffi for
  adjustors on some platforms and for GHCi, and the LLVM backend should be 
  quite easy too.
  
  I would at least take a look at the hard bits and see whether you think 
  it's going to be possible to extend these to handle struct args/returns. 
  Because if not, then the idea is a dead end.  Or maybe we will need to
  limit the scope to make things easier (e.g. only integer and pointer 
  fields).
  
  Cheers,
  Simon
  
  On 18/03/2014 17:31, Yuras Shumovich wrote:
   Hi,
  
   I thought I had lost the battle :)
   Thank you for the support, Simon!
  
   I'm interested in a full-featured solution: arguments, return values,
   foreign import, foreign export, etc. But it is too much for me to do it
   all at once. So I started with the dynamic wrapper.
  
   The plan is to support structs as arguments and return values for the
   dynamic wrapper using libffi;
   then implement native adjustors at least for x86_64 linux;
   then make the final design decision (tuple or data? language pragma? union
   support? etc);
   and only then start working on foreign import.
  
   But I'm open for suggestions. Just let me know if you think it is better
   to start with return value support for foreign import.
  
   Thanks,
   Yuras
  
   On Tue, 2014-03-18 at 12:19 +, Simon Marlow wrote:
   I'm really keen to have support for returning structs in particular.
   Passing structs less so, because working around the lack of struct
   passing isn't nearly as onerous as working around the lack of struct
   returns.  Returning multiple values from a C function is a real pain
   without struct returns: you have to either allocate some memory in
   Haskell or in C, and both methods are needlessly complex and slow.
   (though allocating in Haskell is usually better.) C++ code does this all
   the time, so if you're wrapping C++ code for calling from Haskell, the
   lack of multiple returns bites a lot.
  
   In fact implementing this is on my todo list, I'm really glad to see
   someone else is planning to do it :-)
  
   The vague plan I had in my head was to allow the return value of a
   foreign import to be a tuple containing marshallable types, which would
   map to the appropriate return convention for a struct on the current
   platform.  Perhaps allowing it to be an arbitrary single-constructor
   type is better, because it allows us to use a type that has a Storable
   instance.
  
   Cheers,
   Simon
  
  
  
 
 


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


CmmLint error when doing safe ccall from cmm

2014-06-20 Thread Yuras Shumovich
Hello,

I'm trying to do a safe ccall from cmm (see below for the code). It seems
to work, but -dcmm-lint is not satisfied:

/opt/ghc-7.8.2/bin/ghc --make -o test hs.hs cmm.cmm c.c -dcmm-lint 
-fforce-recomp
Cmm lint error:
  in basic block c4
in assignment: 
  _c1::I32 = R1;
  Reg ty: I32
  Rhs ty: I64
Program was:
  {offset
c5: _c0::I64 = R1;
_c2::I64 = c_test;
_c3::I32 = %MO_UU_Conv_W64_W32(_c0::I64);
I64[(youngc4 + 8)] = c4;
foreign call ccall arg hints:  []  result hints:  [] (_c2::I64)(...) 
returns to c4 args: ([_c3::I32]) ress: ([_c1::I32])ret_args: 8ret_off: 8;
c4: _c1::I32 = R1;
R1 = %MO_SS_Conv_W32_W64(_c1::I32);
call (P64[(old + 8)])(R1) args: 8, res: 0, upd: 8;
  }

no location info: 
Compilation had errors


The same code without the safe annotation passes cmm lint. Is it my error
or a ghc bug? How can I do a safe ccall in cmm correctly?

Here is the code:

== c.c ==
#include <assert.h>

int c_test(int i)
{
  assert(i == 1);
  return 2;
}

== cmm.cmm
#include "Cmm.h"

cmm_test(W_ i)
{
  CInt i1;
  (i1) = ccall c_test(W_TO_INT(i)) safe;
  return (TO_W_(i1));
}

== hs.hs ==
{-# LANGUAGE GHCForeignImportPrim #-}
{-# LANGUAGE ForeignFunctionInterface #-}
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE UnboxedTuples #-}
{-# LANGUAGE UnliftedFFITypes #-}

import GHC.Prim
import GHC.Types
import Control.Exception

foreign import prim "cmm_test" test :: Int# -> Int#

main :: IO ()
main = do
  let i1 = test 1#
  assert (I# i1 == 2) (return ())


Thanks,
Yuras


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: CmmLint error when doing safe ccall from cmm

2014-06-20 Thread Yuras Shumovich
Simon,

Sorry if I'm too stupid, but
do you mean we only support 64-bit results from a prim call? But I'm
using the TO_W_ macro to convert the result to a 64-bit value before
returning from the cmm function.
Or do you mean the result of the ccall?
nativeGen/X86/CodeGen.hs:genCCall64 definitely supports that. And it
works for an unsafe ccall.
It looks like the issue is somewhere in the translation from high-level
cmm to low-level cmm.
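
For comparison, here is the plain FFI version of the same call; compiling
it with -ddump-cmm shows the Cmm GHC generates for a safe call (a sketch):

{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.Types

-- same c_test as in my example; compare the generated safe call
foreign import ccall safe "c_test" c_test :: CInt -> IO CInt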

Thanks,
Yuras

On Fri, 2014-06-20 at 21:24 +0100, Simon Marlow wrote:
 On 20/06/14 15:03, Yuras Shumovich wrote:
  Hello,
 
  I'm trying to do safe ccall from cmm (see below for the code). It seems
  to work, but -dcmm-lint is not satisfied:
 
  /opt/ghc-7.8.2/bin/ghc --make -o test hs.hs cmm.cmm c.c -dcmm-lint 
  -fforce-recomp
  Cmm lint error:
 in basic block c4
   in assignment:
 _c1::I32 = R1;
 Reg ty: I32
 Rhs ty: I64
  Program was:
 {offset
   c5: _c0::I64 = R1;
   _c2::I64 = c_test;
   _c3::I32 = %MO_UU_Conv_W64_W32(_c0::I64);
   I64[(youngc4 + 8)] = c4;
   foreign call ccall arg hints:  []  result hints:  [] 
  (_c2::I64)(...) returns to c4 args: ([_c3::I32]) ress: 
  ([_c1::I32])ret_args: 8ret_off: 8;
   c4: _c1::I32 = R1;
   R1 = %MO_SS_Conv_W32_W64(_c1::I32);
   call (P64[(old + 8)])(R1) args: 8, res: 0, upd: 8;
 }
 
  no location info:
  Compilation had errors
 
 I believe we only support 64-bit results on a 64-bit platform, but
 you can always narrow to 32 bits with an MO_Conv afterwards if you want.
 This is essentially what happens when you call a function that returns
 CInt using the FFI - you can always try that and see what Cmm you get.
 
 Also, I'll be mildly surprised if using safe foreign calls from 
 hand-written Cmm works, since I don't believe we use them anywhere so it 
 isn't likely to be well tested :-)
 
 Cheers,
 Simon
 
 
  The same code without the safe annotation passes cmm lint. Is it my error
  or a ghc bug? How can I do a safe ccall in cmm correctly?
 
  Here is the code:
 
  == c.c ==
  #include <assert.h>
 
  int c_test(int i)
  {
 assert(i == 1);
 return 2;
  }
 
  == cmm.cmm
  #include "Cmm.h"
 
  cmm_test(W_ i)
  {
 CInt i1;
 (i1) = ccall c_test(W_TO_INT(i)) safe;
 return (TO_W_(i1));
  }
 
  == hs.hs ==
  {-# LANGUAGE GHCForeignImportPrim #-}
  {-# LANGUAGE ForeignFunctionInterface #-}
  {-# LANGUAGE MagicHash #-}
  {-# LANGUAGE UnboxedTuples #-}
  {-# LANGUAGE UnliftedFFITypes #-}
 
  import GHC.Prim
  import GHC.Types
  import Control.Exception
 
  foreign import prim "cmm_test" test :: Int# -> Int#
 
  main :: IO ()
  main = do
 let i1 = test 1#
 assert (I# i1 == 2) (return ())
 
 
  Thanks,
  Yuras
 
 
  ___
  ghc-devs mailing list
  ghc-devs@haskell.org
  http://www.haskell.org/mailman/listinfo/ghc-devs
 
 


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: C-- specfication

2014-05-03 Thread Yuras Shumovich
Are you interested in ghc's cmm? It is different from the original C--.
(And it differs between ghc versions.)

Here is the best thing I found:
https://github.com/ghc/ghc/blob/master/compiler/cmm/CmmParse.y

There is a wiki page:
https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/CmmType
But it is out of date and sometimes misleading.

On Sat, 2014-05-03 at 12:05 +0200, Florian Weimer wrote:
 I'm looking for a specification of C--.  I can't find it on the
 cminuscminus.org web site, and it's also not included in the release
 tarball.  Does anybody know where to get it?
 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: FFI: c/c++ struct on stack as an argument or return value

2014-03-18 Thread Yuras Shumovich
On Tue, 2014-03-18 at 12:37 +1100, Manuel M T Chakravarty wrote:
  
  A library implementation can't generate a native dynamic wrapper; it has
  to use the slow libffi.
 
 When we first implemented the FFI, there was no libffi. Maintaining the 
 adjustor code for all platforms is a PITA; hence, using libffi was a welcome 
 way to improve portability.

Do you think we can remove native adjustors? I can prepare a patch.

It requires minor changes to cache the ffi_cif structure. In the
desugaring phase, for each wrapper, we can generate a fresh global
variable to store the cif pointer and pass it to createAdjustor.

 
  
  From my point of view, at this point it is more important to agree on
  the next question: do we want such functionality in ghc at all? I don't
  want to waste time on it if nobody wants to see it merged.
 
 I still don’t see the benefit in further complicating an already murky corner 
 of the compiler. Moreover, for this to make sense, it would need to work on 
 all supported platforms. Unless you are volunteering to implement it on 
 multiple platforms, this would mean we’d use libffi for most platforms
 anyway. This brings me to my original point, a library or tool is the better 
 place for this.

OK, I don't buy it, but I see your point.

 
 Manuel
 
 PS: I’d happily accept language-c-inline patches for marshalling structs.
 


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: FFI: c/c++ struct on stack as an argument or return value

2014-03-18 Thread Yuras Shumovich
Hi,

I thought I had lost the battle :)
Thank you for the support, Simon!

I'm interested in a full-featured solution: arguments, return values,
foreign import, foreign export, etc. But it is too much for me to do it
all at once. So I started with the dynamic wrapper.

The plan is to support structs as arguments and return values for the
dynamic wrapper using libffi;
then implement native adjustors at least for x86_64 linux;
then make the final design decision (tuple or data? language pragma? union
support? etc);
and only then start working on foreign import.

But I'm open for suggestions. Just let me know if you think it is better
to start with return value support for foreign import. 
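
Just to illustrate where the tuple idea Simon describes below would end up,
in purely hypothetical syntax (nothing like this is implemented yet):

-- hypothetical: c_get_point returns struct { int x; int y; } by value
foreign import ccall "c_get_point" getPoint :: IO (CInt, CInt)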

Thanks,
Yuras

On Tue, 2014-03-18 at 12:19 +, Simon Marlow wrote:
 I'm really keen to have support for returning structs in particular. 
 Passing structs less so, because working around the lack of struct 
 passing isn't nearly as onerous as working around the lack of struct 
 returns.  Returning multiple values from a C function is a real pain 
 without struct returns: you have to either allocate some memory in 
 Haskell or in C, and both methods are needlessly complex and slow. 
 (though allocating in Haskell is usually better.) C++ code does this all 
 the time, so if you're wrapping C++ code for calling from Haskell, the 
 lack of multiple returns bites a lot.
 
 In fact implementing this is on my todo list, I'm really glad to see 
 someone else is planning to do it :-)
 
 The vague plan I had in my head was to allow the return value of a 
 foreign import to be a tuple containing marshallable types, which would 
 map to the appropriate return convention for a struct on the current 
 platform.  Perhaps allowing it to be an arbitrary single-constructor 
 type is better, because it allows us to use a type that has a Storable 
 instance.
 
 Cheers,
 Simon
 


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: FFI: c/c++ struct on stack as an argument or return value

2014-03-15 Thread Yuras Shumovich
Manuel,

I think the compiler is the right place. It is impossible to have an
efficient implementation in a library.

For a dynamic wrapper (the foreign import wrapper stuff) ghc generates a
piece of executable code at runtime. There are native implementations for
a number of platforms, and libffi is used as a fallback for other
platforms (see rts/Adjustor.c). AFAIK it is done that way because libffi
is slower than the native implementation.

A library implementation can't generate a native dynamic wrapper; it has
to use the slow libffi.


 On Sat, 2014-03-15 at 00:17 -0400, Carter Schonwald wrote: 
  indeed, it's very very easy to do storable instances that correspond to the
  struct type you want,
  
  the ``with`` function in
  http://hackage.haskell.org/package/base-4.6.0.1/docs/Foreign-Marshal-Utils.html
  actually gets you most of the way there!

I'm not sure I understand. `with` can be used to provide a C function with
a pointer to a C structure.
How can it help when the C function requires the C structure to be passed
by value?
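
To make it concrete, `with` only gives the pass-by-pointer workaround; a
sketch, where c_test_ptr is a hypothetical C shim that takes a pointer:

{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign
import Foreign.C.Types

data CStruct = CStruct CInt CInt

instance Storable CStruct where
  sizeOf _    = 8  -- assuming 4-byte ints
  alignment _ = 4
  peek p = do
    a <- peekByteOff p 0
    b <- peekByteOff p 4
    return (CStruct a b)
  poke p (CStruct a b) = pokeByteOff p 0 a >> pokeByteOff p 4 b

-- hypothetical shim: the C side still receives a pointer, not the
-- struct itself, which is exactly the limitation I'm talking about
foreign import ccall "c_test_ptr" c_test_ptr :: Ptr CStruct -> IO ()

callIt :: CStruct -> IO ()
callIt s = with s c_test_ptr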


 i think the crux of Manuel's point is mainly that any good proposal
 has to
 at least give a roadmap to support on all the various platforms etc
 etc

I don't think you are expecting a detailed schedule from me. Passing
structures by value is possible on all platforms ghc supports, and it can
be implemented for any particular platform if somebody is interested.

From my point of view, at this point it is more important to agree on
the next question: do we want such functionality in ghc at all? I don't
want to waste time on it if nobody wants to see it merged.

Thanks,
Yuras

On Sat, 2014-03-15 at 00:37 -0400, Carter Schonwald wrote:
 I'm not opposing that, in fact, there's a GHC ticket discussing some stuff
 related to this (related to complex numbers).
 
 i think the crux of Manuel's point is mainly that any good proposal has to
 at least give a roadmap to support on all the various platforms etc etc
 
 
 On Sat, Mar 15, 2014 at 12:33 AM, Edward Kmett ekm...@gmail.com wrote:
 
  I don't care enough to fight and try to win the battle, but I just want to
  point out that Storable structs are far more brittle and platform dependent
  than borrowing the already correct platform logic for struct passing from
  libffi.
 
  I do think the existing FFI extension made the right call under the 32 bit
  ABIs that were in use at the time it was defined. That said, with 64-bit
  ABIs saying that 2 32-bit ints should be passed in a single 64 bit
  register, you wind up with large chunks of third party APIs we just can't
  call out to directly any more, requiring many one-off manual C shims.
 
  -Edward
 
 
 
 
  On Sat, Mar 15, 2014 at 12:17 AM, Carter Schonwald 
  carter.schonw...@gmail.com wrote:
 
  indeed, it's very very easy to do storable instances that correspond to
  the struct type you want,
 
  the ``with`` function in
  http://hackage.haskell.org/package/base-4.6.0.1/docs/Foreign-Marshal-Utils.html
   actually gets you most of the way there!
 
 
 
 
  On Sat, Mar 15, 2014 at 12:00 AM, Manuel M T Chakravarty 
  c...@cse.unsw.edu.au wrote:
 
  Yuras,
 
  I’m not convinced that the compiler is the right place for this kind of
  functionality. In fact, when we designed the Haskell FFI, we explicitly
  decided against what you propose. There are a few reasons for this.
 
  Firstly, compilers are complex beasts, and secondly, it takes a long
  time until a change in the compiler goes into production. Hence, as a
  general rule, it is advisable to move complexity from the compiler into
  libraries as this reduces compiler complexity. Libraries are less complex
  and changes can be rolled out much more quickly (it’s essentially a 
  Hackage
  upload versus waiting for the next GHC and Haskell Platform release).
 
  Thirdly, we have got the Haskell standard for a reason and modifying the
  compiler implies a language extension.
 
  The design goal for the Haskell FFI was to provide the absolute minimum
  as part of the language and compiler, and to layer additional conveniences
  on top of that in the form of libraries and tools.
 
  Have you considered the library or tool route?
 
  Manuel
 
  Yuras Shumovich shumovi...@gmail.com:
   Hi,
  
   Right now ghc's FFI doesn't support c/c++ structures.
  
   Whenever we have a foreign function that accepts or returns a struct by
   value, we have to create a wrapper that accepts or returns a pointer to
   the struct. It is inconvenient, but actually not a big deal.
  
   But there is no easy workaround when you want to export a haskell
   function to use it with a c/c++ API that requires structures to be passed
   by value. (Usually it is a callback in a c/c++ API. You can't change its
   signature, and if it doesn't provide some kind of void* userdata, then
   you are stuck.)
  
   I'm interested in fixing that. I'm going to start with the 'foreign
   import wrapper ...' stuff.
  
   Calling conventions for passing c/c++ structures by value are pretty

Re: FFI: c/c++ struct on stack as an argument or return value

2014-03-14 Thread Yuras Shumovich
On Fri, 2014-03-14 at 09:08 -0400, Edward Kmett wrote:
 I spent some time hacking around on this from a library perspective when I
 had to interoperate with a bunch of Objective C on a 64-bit mac as many of
 the core library functions you need to FFI out to pass around pairs of Int32s
 as a struct small enough by the x64 ABI to get shoehorned into one
 register, and as I was programmatically cloning Objective C APIs via
 template haskell I couldn't use the usual clunky C shims.

Was it related to the language-c-inline package?

 So if nothing else, you can at least take this as a vote of confidence that
 your idea isn't crazy. =)
 
 I'd also be happy to answer questions if you get stuck or need help.

Thank you, Edward

Since there is at least one person who is interested, I'll start
asking questions. Please let me know when I become too noisy :)

For now I'm focused on the desugaring phase. Right now

type Fn = CInt -> CInt -> IO ()
foreign import ccall "wrapper" f :: Fn -> IO (FunPtr Fn)

is desugared into

f :: Fn -> IO (FunPtr Fn)
f hsFunc = do
  sPtr <- newStablePtr hsFunc
  createAdjustor sPtr staticWrapper ...

Here staticWrapper is the address of a C function. It will dereference the
sPtr, cast it to StgClosure* and call it with the appropriate arguments.
All the arguments are primitive C types (int, char, pointer, etc.), so it
is easy to convert them to the corresponding haskell types via rts_mkInt,
rts_mkChar, etc.

But I want to allow arguments to be C structs.

data CStruct = CStruct {
  i :: CInt,
  j :: CInt
}
type Fn = CStruct -> IO ()
foreign import ccall "wrapper" f :: Fn -> IO (FunPtr Fn)

It looks like it is impossible to instantiate a CStruct from a C function.
Is that true? Is it easy to add such functionality?

The only solution I see is to flatten the CStruct before creating the StablePtr:

f :: Fn -> IO (FunPtr Fn)
f hsFunc = do
  sPtr <- newStablePtr $ \i j -> hsFunc (CStruct i j)
  createAdjustor sPtr staticWrapper ...

Does it make sense? It will add some performance overhead because of the
additional indirection. Better ideas are welcome.
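
Spelled out, the flattening is just this (with CStruct and Fn as above;
createAdjustor itself is an RTS entry point, so it is elided here):

-- the flattened callback type the RTS-side wrapper would invoke
type FlatFn = CInt -> CInt -> IO ()

-- rebuild the struct from its flattened fields before calling the
-- user's function
flatten :: (CStruct -> IO ()) -> FlatFn
flatten hsFunc = \i j -> hsFunc (CStruct i j)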

Thanks,
Yuras

 
 -Edward
 
 
 On Fri, Mar 14, 2014 at 7:50 AM, Yuras Shumovich shumovi...@gmail.com wrote:
 
 
  Hi,
 
  Right now ghc's FFI doesn't support c/c++ structures.
 
  Whenever we have a foreign function that accepts or returns a struct by
  value, we have to create a wrapper that accepts or returns a pointer to
  the struct. It is inconvenient, but actually not a big deal.
 
  But there is no easy workaround when you want to export a haskell function
  to use it with a c/c++ API that requires structures to be passed by value.
  (Usually it is a callback in a c/c++ API. You can't change its signature,
  and if it doesn't provide some kind of void* userdata, then you are
  stuck.)
 
  I'm interested in fixing that. I'm going to start with the 'foreign import
  wrapper ...' stuff.
 
  Calling conventions for passing c/c++ structures by value are pretty
  tricky and platform/compiler specific. So initially I'll use libffi for
  that (it will work when USE_LIBFFI_FOR_ADJUSTORS is defined, see
  rts/Adjustor.c). It will allow me to explore the design space without
  bothering about low-level implementation details. Later it could be
  implemented for native (non-libffi) adjustors.
 
  Is anybody interested in that? I appreciate any comments/ideas.
 
  Right now I don't have a clear design. It would be nice to support plain
  haskell data types that are 1) not recursive, 2) have one constructor and
  3) contain only c/c++ types. But it doesn't work with c/c++ unions. Any
  ideas are welcome.
 
  An example of how to use libffi with structures:
  http://www.atmark-techno.com/~yashi/libffi.html#Structures
 
  Thanks,
  Yuras
 
 
  ___
  ghc-devs mailing list
  ghc-devs@haskell.org
  http://www.haskell.org/mailman/listinfo/ghc-devs
 


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs