Re: PMC: addConCt and newtypes bottom info

2023-10-27 Thread Sebastian Graf

Hi Rodrigo,

Happy to see that you resumed your work on the pattern-match checker.
I think you are right; we could reasonably just go with your code, not 
least because it is less confusing to retain as much info about `x` as 
possible.
I don't think it makes a difference, because whenever we add a 
constraint `x ≁ ⊥` afterwards, we call `addNotBotCt` which will 
interpret this constraint as `y ≁ ⊥` via `lookupVarInfoNT`, and we have 
accurate BotInfo for `y`.
Basically, whenever we have seen `x ~ T y` for a newtype `T`, we will never 
look at the `BotInfo` of `x` again. I thought it might become relevant in 
`generateInhabitingPatterns` for warning messages, but there we eagerly 
instantiate through NTs anyway.
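Since the whole point of (2) in the Note is that a newtype match never forces the scrutinee, here is a small, self-contained illustration of that operational fact (the names `N`, `D`, `lazyMatch` and `strictMatch` are mine, not GHC's):

```haskell
import Control.Exception (SomeException, evaluate, try)

newtype N = N Int
data    D = D Int

-- A newtype match compiles to a coercion and never forces the scrutinee;
-- this is why `x ~ N y` means x and y agree on ⊥-ness.
lazyMatch :: N -> Int
lazyMatch x = case x of N _ -> 0

-- A data-constructor match is strict, hence "strict match ==> not ⊥".
strictMatch :: D -> Int
strictMatch x = case x of D _ -> 0

main :: IO ()
main = do
  print (lazyMatch undefined)  -- succeeds: the match does not force x
  r <- try (evaluate (strictMatch undefined))
         :: IO (Either SomeException Int)
  putStrLn (either (const "bottom") show r)  -- the match forces x
```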


So by all means, open an MR for your change. Good work!

Sebastian

-- Original Message --
From: "Rodrigo Mesquita" 
To: "Sebastian Graf" 
Cc: "GHC developers" 
Sent: 27.10.2023 17:34:29
Subject: PMC: addConCt and newtypes bottom info


Dear Sebastian and GHC devs,

Regarding this bit from the function addConCt in the 
GHC.HsToCore.Pmc.Solver module,


Nothing -> do
  let pos' = PACA alt tvs args : pos
  let nabla_with bot' =
        nabla{ nabla_tm_st = ts{ts_facts = addToUSDFM env x
                 (vi{vi_pos = pos', vi_bot = bot'})} }

  -- Do (2) in Note [Coverage checking Newtype matches]
  case (alt, args) of
    (PmAltConLike (RealDataCon dc), [y]) | isNewDataCon dc ->
      case bot of
        MaybeBot -> pure (nabla_with MaybeBot)
        IsBot    -> addBotCt (nabla_with MaybeBot) y
        IsNotBot -> addNotBotCt (nabla_with MaybeBot) y
    _ -> assert (isPmAltConMatchStrict alt) $
         pure (nabla_with IsNotBot) -- strict match ==> not ⊥

My understanding is that given some x which we know e.g. cannot be 
bottom, if we learn that x ~ N y, where N is a newtype (NT), we move 
our knowledge of x not being bottom to the underlying NT Id y, since 
forcing the newtype in a pattern is equivalent to forcing the 
underlying NT Id.


Additionally, we set x’s BotInfo to MaybeBot. However, I don’t understand 
why we must reset x’s BotInfo to MaybeBot; couldn’t we keep it as it is 
while setting y’s BotInfo to the same info?
An example where resetting this info on the newtype-match is 
important/necessary would be excellent.


FWIW, I built and tested the PMC of ghc devel2 with

MaybeBot -> pure (nabla_with MaybeBot)
IsBot    -> addBotCt (nabla_with IsBot) y
IsNotBot -> addNotBotCt (nabla_with IsNotBot) y

And it worked without warnings or errors…

Thanks in advance!
Rodrigo
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Reinstallable - base

2023-10-20 Thread Sebastian Graf
Hi,

Thanks, Ben, that sounds very interesting. Allow me to provide the
following perspective.
It seems that those `*-internal` packages take the role of a static
library's symbol table in the absence of a fixed ABI:
It allows clients (such as `base` and `template-haskell`) to link against
fixed, internal implementations provided by GHC.
"Reinstallable" then just means that a library can be linked against
multiple different GHC versions providing the same set of internal APIs.

But perhaps this analogy is not all too useful, given the clash with the
traditional use of the terms "linking" and "static library".

Cheers,
Sebastian

On Fri, 20 Oct 2023 at 12:35, Andrei Borzenkov <andreyborzenkov2...@gmail.com> wrote:

> >  the `List` type provided by `base-X.Y.Z` and `base-X.Y.Z'` may differ.
> This is not actually true: `List` will obviously live in `ghc-internal`, and
> `ghc-internal` would be the same for `base-4.9` and for `base-4.7.1`. This
> raises the question of whether we really need to depend on `base` in
> `template-haskell`. Perhaps we would be satisfied by the small set of things
> that `ghc-internal` will contain?
> 20.10.2023 14:23, Ben Gamari writes:
>
> Viktor Dukhovni   writes:
>
>
> On Tue, Oct 17, 2023 at 04:54:41PM +0100, Adam Gundry wrote:
>
>
> Thanks for starting this discussion, it would be good to see progress in
> this direction. As it happens I was discussing this question with Ben and
> Matt over dinner last night, and unfortunately they explained to me that it
> is more difficult than I naively hoped, even once wired-in and known-key
> things are moved to ghc-internal.
>
> The difficulty is that, as a normal Haskell library, ghc itself will be
> compiled against a particular version of base. Then when Template Haskell is
> used (with the internal interpreter), code will be dynamically loaded into a
> process that already has symbols for ghc's version of base, which means it
> is not safe for the code to depend on a different version of base. This is
> rather like the situation with TH and cross-compilers.
>
> To avoid that problem, GHC's own dependency on "base" could be indirect
> via a shared object with versioned symbol names and a version-specific
> SONAME (possibly even a GHC-private SONAME and private symbol version
> names).  Say "libbase.so.4.19.1".
>
>
> The problem here is deeper than simply the symbol names. For instance,
> the `List` type provided by `base-X.Y.Z` and `base-X.Y.Z'` may differ.
> Since lists are used in the `template-haskell` AST, we would be unable
> to share lists between `template-haskell` and `ghc`.
>
> As noted in my recent reply elsewhere in the thread, this can be avoided
> by communicating via serialisation instead of heap objects.
>
> Cheers,
>
> - Ben
>
>
>


RFC Or patterns syntax: (p1 | p2) vs. (p1; p2)

2023-07-24 Thread Sebastian Graf
Hi devs,

I would like to invite you to provide arguments for or against the Or
patterns syntax RFC `(p1; p2)` vs. `(p1 | p2)` over at this GH issue.
*In particular, `(p1 | p2)` has a small lead over `(p1; p2)`*, but the
latter will steal syntax from a hypothetical guards-in-patterns extension
`(p | e)` as described here.
I dismissed this point until Vlad made me aware of the fact that `f (a ->
b)` *could* mean "a pattern match on the type constructor `(->)`" in a
Dependent Haskell future.
Apparently, the existence of the hypothetical guards-in-patterns extension
was reason enough to exclude view patterns from GHC2021.
Of course, `(p1; p2)` has issues of its own.
Given how close this vote is, I want to make sure that you, the GHC devs,
are aware of this issue and perhaps have a few minutes to formulate a
(counter) argument or just leave your vote before we submit the winner in
the actual amendment proposal, at which point it will be up to the GHC
steering committee to decide.
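For readers who have not followed the proposal: neither spelling is valid GHC today, and the effect both candidates aim for currently requires guards or repeated equations. A hedged sketch (the function name is mine):

```haskell
-- Today, giving several patterns the same right-hand side needs guards
-- (or duplicated equations):
classify :: Int -> String
classify n
  | n == 1 || n == 2 = "small"
  | otherwise        = "other"

-- Under the RFC this could hypothetically be written as either
--   classify (1 | 2) = "small"   -- candidate A
--   classify (1; 2)  = "small"   -- candidate B

main :: IO ()
main = putStrLn (classify 1 ++ " " ++ classify 3)
```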

Thanks,
Sebastian


Minutes of Sunday's meeting of the "Staged Working Group"

2023-06-13 Thread Sebastian Graf

Hi GHC devs,

On Sunday I realised that there were many different people around at 
ZuriHac that are very knowledgeable about staged metaprogramming and 
macro systems (outside GHC, even).
I really want a good staged metaprogramming story in Haskell (but don't 
know much about it or what I could contribute), and so I called everyone 
into a very spontaneous meeting, dubbed the "Staged Working Group".
The purpose of the whole meeting was rather nebulous (staged, even); the 
only goal for me was to throw involved people in one room to have a 
focussed discussion (rather than lumping together with a subset of the 
people and then dissolving in the hallway) and to talk about different 
efforts in the community.
In the end, I think we got a much clearer picture of the challenges 
involved.


We are very fortunate that Ben has kept minutes with useful pointers: 
https://edit.smart-cactus.org/u__IGA1bTd2DpulYmlnxaw


Note that I don't intend to hold regular meetings or something of the 
sort; it was essentially a one time thing (but perhaps we'll have a 2.0 
meeting at next year's ZuriHac).
We loosely agreed to keep everyone posted on ongoing efforts in the 
direction of staged metaprogramming (and macros) by writing short status 
reports to this mailing list.


Thanks to everyone who is involved in improving (Typed) Template Haskell 
and who was there on Sunday!

Sebastian


Re: Trouble building GHC

2023-05-31 Thread Sebastian Graf
> I've had no issues with the configure step by running the configure_ghc
shell function that the flake provides.

For reference, this is the relevant ghc.nix issue:
https://github.com/alpmestan/ghc.nix/issues/111
It seems that `configure_ghc` used to be a shell function in
https://github.com/alpmestan/ghc.nix/blob/b200a76a4f28d6434e4678827a0373002e641b12/default.nix#L156
.
Nowadays, it is a standalone bash script (great work, Magnus!), which
explains why it works for you:
https://github.com/alpmestan/ghc.nix/blob/f34c21877257fc37bbcf8962dc006024bfd0f946/ghc.nix#L138

You can't do `./configure $CONFIGURE_ARGS` in zsh directly, though.

On Wed, 31 May 2023 at 09:49, Georgi Lyubenov <godzbaneb...@gmail.com> wrote:

> Just chiming in to mention that I'm on zsh, and I've had no issues with
> the configure step by running the configure_ghc shell function that the
> flake provides.
> On 5/31/23 10:13, Sebastian Graf wrote:
>
> Hi Lyle,
>
> I'm sorry that you have so much trouble in getting your first build done.
> The Classes.hi issue sounds like something I had experienced in the past,
> but I'm not having it at the moment.
> Are you also using symlinks by any chance? Then it is very likely that you
> have been bitten by https://gitlab.haskell.org/ghc/ghc/-/issues/22451,
> the workaround to which would be to do something like `cd "$(readlink -f
> .)"` before you start your build.
>
> Regarding your second issue using ghc.nix, a quick google turned up
> https://gitlab.haskell.org/ghc/ghc/-/issues/20429#note_379762.
> Is it possible that you didn't start from a clean build?
> E.g., at the least you should `rm -rf _build` (note that `hadrian/cabal
> clean` sadly is insufficient IIRC for reasons I don't recall).
> I often simply do `git clean -fxd` to be extra sure.
> After that, you'll have to boot, configure (including passing
> $CONFIGURE_ARGS) and build again.
> By the way, are you using ZSH? I'm using it and I have to pass the
> CONFIGURE_ARGS in a slightly different way
> <https://github.com/alpmestan/ghc.nix#building-ghc>: `./configure
> ${=CONFIGURE_ARGS}`.
>
> I also updated
> https://gitlab.haskell.org/ghc/ghc/-/wikis/building/preparation/linux#nixnixos
> to account for new-style flakified builds+direnv, if that's a workflow that
> you are familiar with.
>
> Hope that helps,
> Sebastian
>
> On Wed, 31 May 2023 at 05:14, Lyle Kopnicky wrote:
>
>> Hi folks, I’m new here. I’ll be attending the GHC Contributors’ Workshop
>> next week, and in preparation, I’m trying to build GHC, both the native
>> code backend and the JS backend. So far, I’ve only tried to build it with
>> the native code backend, but I haven’t been able to get it to work. I’ve
>> gotten help from some friendly folks on the #ghc channel on Matrix, and
>> made some progress, but I’m still stuck.
>>
>> Is there anyone here who could be a point person for helping me get it to
>> build? BTW I’m located on the west coast of the US (until next week when
>> I’ll be in Switzerland), so time lag may be a factor.
>>
>> I’m using a Mac with aarch64 and macOS 13.3. Here are some of the things
>> I’ve tried, and issues I’ve run into:
>>
>>
>>- Started with the advice from this wiki:
>>https://gitlab.haskell.org/ghc/ghc/-/wikis/building
>>- Checked out the ghc source, on HEAD.
>>- The rest of the steps were at
>>https://gitlab.haskell.org/ghc/ghc/-/wikis/building/preparation/mac-osx
>>- Already had Apple’s command line tools, but when I tried to do
>>operations using them, I got an error saying I needed the full Xcode, so I
>>installed that.
>>- brew install autoconf automake python sphinx-doc
>>   - Worked fine, also added sphinx-build to the path
>>- Initially tried using ghc 9.2.7 to build - later tried switching to
>>9.4.4 and eventually 9.4.5
>>- cabal update; cabal install alex happy haddock
>>- This is where I ran into trouble - couldn’t build haddock. Cabal
>>   said it couldn’t resolve the dependencies.
>>   - I tried switching to ghc 9.4.4 and cabal 3.10.1.0 - same problem
>>   - User romes (Rodrigo) on #ghc helped with this - was able to
>>   reproduce it and filed a ticket:
>>   https://github.com/haskell/haddock/issues/1596
>>   - However he pointed out that haddock was already supplied through
>>   ghcup so I don’t need to build it.
>>- Already had MacTex installed and I installed the DejaVu font family.
>>- ./boot && ./configure
>>   - Worked fine, although ./boot gave me lots of autoconf warnings
>>   like:
>>  

Re: Trouble building GHC

2023-05-31 Thread Sebastian Graf
Hi Lyle,

I'm sorry that you have so much trouble in getting your first build done.
The Classes.hi issue sounds like something I had experienced in the past,
but I'm not having it at the moment.
Are you also using symlinks by any chance? Then it is very likely that you
have been bitten by https://gitlab.haskell.org/ghc/ghc/-/issues/22451, the
workaround to which would be to do something like `cd "$(readlink -f .)"`
before you start your build.
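The workaround is easy to try in isolation; a minimal sketch with made-up paths:

```shell
# A checkout reached through a symlink can trip up the build (#22451).
# Re-entering the directory via its resolved physical path avoids it:
mkdir -p /tmp/ghc-real            # stand-in for the real checkout
ln -sfn /tmp/ghc-real /tmp/ghc-link
cd /tmp/ghc-link                  # cwd now contains a symlink component
cd "$(readlink -f .)"             # the workaround: resolve symlinks
pwd -P                            # physical, symlink-free path
```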

Regarding your second issue using ghc.nix, a quick google turned up
https://gitlab.haskell.org/ghc/ghc/-/issues/20429#note_379762.
Is it possible that you didn't start from a clean build?
E.g., at the least you should `rm -rf _build` (note that `hadrian/cabal
clean` sadly is insufficient IIRC for reasons I don't recall).
I often simply do `git clean -fxd` to be extra sure.
After that, you'll have to boot, configure (including passing
$CONFIGURE_ARGS) and build again.
By the way, are you using ZSH? I'm using it and I have to pass the
CONFIGURE_ARGS in a slightly different way: `./configure ${=CONFIGURE_ARGS}`.

I also updated
https://gitlab.haskell.org/ghc/ghc/-/wikis/building/preparation/linux#nixnixos
to account for new-style flakified builds+direnv, if that's a workflow that
you are familiar with.

Hope that helps,
Sebastian

On Wed, 31 May 2023 at 05:14, Lyle Kopnicky wrote:

> Hi folks, I’m new here. I’ll be attending the GHC Contributors’ Workshop
> next week, and in preparation, I’m trying to build GHC, both the native
> code backend and the JS backend. So far, I’ve only tried to build it with
> the native code backend, but I haven’t been able to get it to work. I’ve
> gotten help from some friendly folks on the #ghc channel on Matrix, and
> made some progress, but I’m still stuck.
>
> Is there anyone here who could be a point person for helping me get it to
> build? BTW I’m located on the west coast of the US (until next week when
> I’ll be in Switzerland), so time lag may be a factor.
>
> I’m using a Mac with aarch64 and macOS 13.3. Here are some of the things
> I’ve tried, and issues I’ve run into:
>
>
>- Started with the advice from this wiki:
>https://gitlab.haskell.org/ghc/ghc/-/wikis/building
>- Checked out the ghc source, on HEAD.
>- The rest of the steps were at
>https://gitlab.haskell.org/ghc/ghc/-/wikis/building/preparation/mac-osx
>- Already had Apple’s command line tools, but when I tried to do
>operations using them, I got an error saying I needed the full Xcode, so I
>installed that.
>- brew install autoconf automake python sphinx-doc
>   - Worked fine, also added sphinx-build to the path
>- Initially tried using ghc 9.2.7 to build - later tried switching to
>9.4.4 and eventually 9.4.5
>- cabal update; cabal install alex happy haddock
>- This is where I ran into trouble - couldn’t build haddock. Cabal
>   said it couldn’t resolve the dependencies.
>   - I tried switching to ghc 9.4.4 and cabal 3.10.1.0 - same problem
>   - User romes (Rodrigo) on #ghc helped with this - was able to
>   reproduce it and filed a ticket:
>   https://github.com/haskell/haddock/issues/1596
>   - However he pointed out that haddock was already supplied through
>   ghcup so I don’t need to build it.
>- Already had MacTex installed and I installed the DejaVu font family.
>- ./boot && ./configure
>   - Worked fine, although ./boot gave me lots of autoconf warnings
>   like:
>   configure.ac:9: warning: The macro `AC_HELP_STRING' is obsolete.
>   configure.ac:9: You should run autoupdate.
>   - Apparently autoconf has renamed these macros to pluralize them,
>   and also encloses the arguments in square brackets.
>- hadrian/build
>   - Ran into another problem: The build failed with:
>   Error, file does not exist and no rule available:
>
>   
> /Users/lyle/devel/haskell/ghc/_build/stage1/libraries/ghc-prim/build/GHC/Classes.hi
>   - Rodrigo helped me out with this. Introduced me to other build
>   flags like -j --flavour=quick.
>   - The issue proved quite persistent, but it might fail on different
>   .hi files.
>   - It turns out if you ask Hadrian to build that specific file, it
>   works. So somehow it can find a rule! Then you can proceed to rebuild 
> and
>   it will fail on a different .hi file. Obviously it would be tedious
>   to do this for all the possible files it can fail on.
>   - I also tried not building profiled libraries.
>   - I tried, instead of using the Apple toolchain, using llvm 12,
>   then llvm 16. But I got different errors from that.
>   - Rodrigo was unable to reproduce the issue.
>- So, I thought I’d try the Nix approach.
>- First I had to repair my nix installation, because apparently
>updating macOS overwrites /etc/zshrc, overwriting the bit that sources
>

Re[2]: Help! Can't build HEAD

2023-03-15 Thread Sebastian Graf

Hi Simon,

I had that very issue a few days ago, but saw this thread too late.
For me it was enough to cd into utils/hpc and do a `git checkout .`.

Sebastian

-- Original Message --
From: "Simon Peyton Jones" 
To: "Sam Derbyshire" 
Cc: "Sam Derbyshire" ; "GHC developers" 
Sent: 15.03.2023 13:57:03
Subject: Re: Help! Can't build HEAD


Ah.

rm utils/hpc
git submodule update

does the job.  Who would have guessed that?  Maybe this thread will 
help others.


Could hadrian have avoided this, perhaps?

Anyway, I'm rolling again, thanks

Simon

On Wed, 15 Mar 2023 at 12:54, Simon Peyton Jones 
 wrote:

Oh I didn't know you had to say "--init".  Anyway that fails

git submodule update --init
Submodule 'utils/hpc' (https://gitlab.haskell.org/hpc/hpc-bin.git) 
registered for path 'utils/hpc'
fatal: destination path '/home/simonpj/code/HEAD-15/utils/hpc' already 
exists and is not an empty directory.
fatal: clone of 'https://gitlab.haskell.org/hpc/hpc-bin.git' into 
submodule path '/home/simonpj/code/HEAD-15/utils/hpc' failed

Failed to clone 'utils/hpc'. Retry scheduled
fatal: destination path '/home/simonpj/code/HEAD-15/utils/hpc' already 
exists and is not an empty directory.
fatal: clone of 'https://gitlab.haskell.org/hpc/hpc-bin.git' into 
submodule path '/home/simonpj/code/HEAD-15/utils/hpc' failed

Failed to clone 'utils/hpc' a second time, aborting
simonpj@LHR-WD-22561:~/code/HEAD-15$

Simon

On Wed, 15 Mar 2023 at 12:46, Sam Derbyshire 
 wrote:

Perhaps even `git submodule update --init`.

On Wed, 15 Mar 2023 at 13:41, Matthew Pickering 
 wrote:

You need to run `git submodule update` I think.

On Wed, Mar 15, 2023 at 12:36 PM Simon Peyton Jones
 wrote:
>
> Aargh!  I can't build HEAD!
>
> I get this:
> ./hadrian/build
> Up to date
> Starting... Finished in 0.04s Error, file does not exist and
no rule available:

>   utils/hpc/hpc-bin.cabal
> Build failed.
>
>
> This is after
> hadrian/build clean
> ./boot
> ./configure
>
> I'm very stalled.  All my trees are borked.  Can anyone help?
>
> Simon


Re: Status of Stream Fusion?

2022-11-14 Thread Sebastian Graf
Yes, if you confine yourself to Miller's "pattern fragment" (Meta variables
may appear in the head of an application, but then the arguments may only
be *distinct* bound variables) then you might be fine.

Thanks for giving it a try!


On Mon, 14 Nov 2022 at 14:27, J. Reinders <jaro.reind...@gmail.com> wrote:

> I think higher order pattern unification is different from higher order
> matching because the unification means both sides are allowed to be
> patterns that may contain meta variables, while matching works on only one
> pattern and one concrete term.
>
> The example pattern ‘forall a. someFunction (\x -> a x x)’ would indeed be
> a potential problem, but I think disallowing that is fine because it
> is never necessary to write such patterns in practice.
>
> The paper linked by Simon contains another example: ‘forall f x. f x’
> matched to ‘0’, which is also a problem because it has infinitely many
> possible solutions, but I would consider this out of scope because it
> involves an application of two unification variables. For the practical
> examples I’ve seen up to now, we only need to handle the application of one
> unification variable with one locally bound variable.
>
> I’m pretty optimistic now that it is possible to implement an algorithm
> that covers the practical use cases (also the Simon’s ‘mapFB' example falls
> into this category) while being much simpler than existing algorithms for
> both higher order unification and higher order matching.
>
> I’ll do some experiments with the rule matcher.
>
> Cheers,
> Jaro
>
> > On 14 Nov 2022, at 14:03, Sebastian Graf  wrote:
> >
> > > I believe the reason that this is easier than higher order matching in
> general because it is restricted to applications of unification variables
> to locally bound variables.
> >
> > Indeed, it is easier to confine oneself to the pattern fragment. I think
> that's entirely the point of pattern unification: Meta variables may appear
> in the head of an application, but then the arguments may only be distinct
> bound variables. E.g., `α x x` would be disallowed.
> >
> > Perhaps it is possible to tweak RULEs just enough to match
> >
> >   forall f next. concatMap (λx → Stream next (f x)) = concatMap' next f
> >
> > (Now that I re-read your RULE, the explicit app `f x` makes sense
> compared to the absence of an app in `next` to indicate it must not mention
> `x`. I completely overread it before.)
> >
> > If `f x` should match `x*2+x` with `f := \x -> 2*x+x` then that means
> RULE matching modulo beta reductions, for a start.
> > Do we also want eta? I think we do, because we are already matching
> modulo eta since https://gitlab.haskell.org/ghc/ghc/-/merge_requests/6222
> (and are very careful not to increase arity of target term in doing so).
> >
> > There are many other troubling issues with pattern unification stemming
> from meta variables on both sides of the equation, but perhaps they don't
> come up because the target term does not contain meta variables. Solving a
> meta variable simply does `eqExpr` with solutions from other use sites, as
> today. It doesn't matter much that we are `eq`ing lambdas, because that's
> pretty simple to do.
> > I can't think of an obvious example where higher-order pattern
> *matching* (urgh, what an unfortunate intersection of meaning between
> "higher-order pattern unification" and "pattern matching") would be much
> more complicated, but I'm no expert in the field.
> >
> > I didn't know the paper Simon cited, but note that Miller's higher-order
> pattern unification (which has become bog standard in dependently-typed
> langs) identifies a useful subset that is also efficient to check. It's
> surprising that the authors do not compare the pattern fragment to the
> identified fragment of their implementation (although they cite it in one
> place as [16]).
> >
> > Anyway, unless someone comes up with a tricky counter-example for a RULE
> that would be complicated to check, you should probably just give it a try.
> >
> > Cheers,
> > Sebastian
> >
> >
> > On Mon, 14 Nov 2022 at 12:32, J. Reinders <jaro.reind...@gmail.com> wrote:
> > Thank you both for the quick responses.
> >
> > > Can you say precisely what you mean by "using stream fusion instead of
> foldr/build fusion in base"?   For example, do you have a prototype library
> that demonstrates what you intend, all except concatMap?
> >
> >
> > I believe the stream-fusion library [1] and in particular the
> Data.List.Stream module [2] implements a version of Data.List from base
>

Re: Status of Stream Fusion?

2022-11-14 Thread Sebastian Graf
> I believe the reason that this is easier than higher order matching in
general because it is restricted to applications of unification variables
to locally bound variables.

Indeed, it is easier to confine oneself to the pattern fragment. I think
that's entirely the point of pattern unification: Meta variables may appear
in the head of an application, but then the arguments may only be distinct
bound variables. E.g., `α x x` would be disallowed.
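Miller's side condition is mechanical enough to state as code. Below is a toy term language and a checker for the pattern fragment; everything here (names included) is my own sketch, not GHC's representation:

```haskell
import Data.List (nub)

-- Toy terms: locally bound variables, meta (unification) variables,
-- and application.
data Term = Var String | Meta String | App Term Term

-- Split a term into its head and argument spine.
spine :: Term -> (Term, [Term])
spine (App f a) = let (h, as) = spine f in (h, as ++ [a])
spine t         = (t, [])

-- Is the term inside Miller's pattern fragment, given the bound variables
-- in scope? A meta variable may head an application only if its arguments
-- are pairwise distinct, locally bound variables.
inPatternFragment :: [String] -> Term -> Bool
inPatternFragment bound t = case spine t of
  (Meta _, args) ->
    let vs = [v | Var v <- args]
    in length vs == length args   -- every argument is a variable...
       && all (`elem` bound) vs   -- ...that is locally bound...
       && vs == nub vs            -- ...and they are pairwise distinct
  (_, args) -> all (inPatternFragment bound) args

main :: IO ()
main = do
  print (inPatternFragment ["x","y"]
           (App (App (Meta "α") (Var "x")) (Var "y")))  -- True:  α x y
  print (inPatternFragment ["x","y"]
           (App (App (Meta "α") (Var "x")) (Var "x")))  -- False: α x x
```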

Perhaps it is possible to tweak RULEs just enough to match

  forall f next. concatMap (λx → Stream next (f x)) = concatMap' next f

(Now that I re-read your RULE, the explicit app `f x` makes sense compared
to the absence of an app in `next` to indicate it must not mention `x`. I
completely overread it before.)

If `f x` should match `x*2+x` with `f := \x -> 2*x+x` then that means RULE
matching modulo beta reductions, for a start.
Do we also want eta? I think we do, because we are already matching modulo
eta since https://gitlab.haskell.org/ghc/ghc/-/merge_requests/6222 (and are
very careful not to increase the arity of the target term in doing so).

There are many other troubling issues with pattern unification stemming
from meta variables on both sides of the equation, but perhaps they don't
come up because the target term does not contain meta variables. Solving a
meta variable simply does `eqExpr` with solutions from other use sites, as
today. It doesn't matter much that we are `eq`ing lambdas, because that's
pretty simple to do.
I can't think of an obvious example where higher-order pattern *matching*
(urgh, what an unfortunate intersection of meaning between "higher-order
pattern unification" and "pattern matching") would be much more
complicated, but I'm no expert in the field.

I didn't know the paper Simon cited, but note that Miller's higher-order
pattern unification (which has become bog standard in dependently-typed
langs) identifies a useful subset that is also efficient to check. It's
surprising that the authors do not compare the pattern fragment to the
identified fragment of their implementation (although they cite it in one
place as [16]).

Anyway, unless someone comes up with a tricky counter-example for a RULE
that would be complicated to check, you should probably just give it a try.

Cheers,
Sebastian


On Mon, 14 Nov 2022 at 12:32, J. Reinders <jaro.reind...@gmail.com> wrote:

> Thank you both for the quick responses.
>
> > Can you say precisely what you mean by "using stream fusion instead of
> foldr/build fusion in base"?   For example, do you have a prototype library
> that demonstrates what you intend, all except concatMap?
>
>
> I believe the stream-fusion library [1] and in particular the
> Data.List.Stream module [2] implements a version of Data.List from base
> with stream fusion instead of foldr/build fusion. It is pretty old at this
> point, so it may not completely match up with the current Data.List module
> any more and it doesn’t use skip-less stream fusion yet.
>
> I now also see that #915 [3] tracks the exact issue of replacing
> foldr/build fusion with stream fusion in base, but is closed because it
> required more research at the time. And in that thread, Sebastian already
> asked the same question as me 5 years ago [4]:
>
> > At least this could get rid of the concatMap roadblock, are there any
> others I'm not aware of?
>
> [1] https://hackage.haskell.org/package/stream-fusion
> [2]
> https://hackage.haskell.org/package/stream-fusion-0.1.2.5/docs/Data-List-Stream.html
> [3] https://gitlab.haskell.org/ghc/ghc/-/issues/915
> [4] https://gitlab.haskell.org/ghc/ghc/-/issues/915#note_141373
>
> > But what about
> >
> > concatMap (\x. Stream next (x*2 +x))
> >
> > Then you want matching to succeed, with the substitution
> > f :->  (\p. p*2 +p)
> >
> > This is called "higher order matching" and is pretty tricky.
>
>
> First of all, I’d like to clarify I’ll write ‘unification variable’, but
> that might be the wrong term. What I mean is a variable that is bound by a
> ‘forall’ in a rewrite rule. So in the example program, I’ll call ‘f’ and
> ’next’ unification variables and ‘x’ a local variable.
>
> Higher order matching sounds like it makes the problem too general
> (although I’ll admit I haven’t looked into it fully). For this matching I
> feel like the core is to change the free variable ‘x’ in the expression
> ‘(x*2 +x)’ into a bound variable to get ‘\p -> p*2 +p’. That part sounds
> very easy. The only problem that remains is when to actually perform this
> change. I would suggest that this should happen when a unification variable
> ‘f’ is applied to a locally bound variable ‘x’. The local variable ‘x’
> should only be allowed to occur in unification variables that are applied
> to it.
>
> And indeed this seems to be what Sebastian suggests:
>
> > perhaps you could prevent that by saying that `x` may not occur freely
> in `next` or `f`, but the paper explicitly *wants* `x` to occur in `next`
>
>
> The paper explicitly 

Re: Status of Stream Fusion?

2022-11-14 Thread Sebastian Graf
Hi Jaro,

I'm very glad that you are interested in picking up the pieces I left
behind!

Re: SpecConstr: Yes, that pass is already moderately complicated, and it gets
much more complicated when you start specialising on non-bound lambdas,
because that would need higher-order pattern unification in RULEs to be
useful (as well as SpecConstr applying those RULEs when specialising). One
smart suggestion by Simon to avoid that was to specialise on bound lambdas
only, after a pass that assigns a name to every lambda. I'm
not sure if that is enough for the recursive specialisation problem arising
in stream fusion, though. I simply haven't played it through so far.

Re: Static argument transformation: I find that much more promising indeed.
Not the pass that transforms the RHS of a binding, but my new idea of having
the Simplifier evaluate, for a recursive function, whether it makes sense
to inline its on-the-fly SAT'd form. See
https://gitlab.haskell.org/ghc/ghc/-/issues/18962 and the prototype
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4553 for details. It
does just fine on concatMap stream fusion pipelines (at least when the
stepper function is small enough to inline), although I remember there are
a few annoying issues regarding SAT'ing stable unfoldings (think INLINE
recursive functions). Just thinking about it makes me excited again, but
I've to finish other stuff in my PhD first.
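For readers who have not seen the static argument transformation spelled out, the transformation itself fits in a few lines; a toy sketch (both function names are mine):

```haskell
-- Before SAT: `f` is a *static* argument, passed unchanged to every
-- recursive call.
mapPlain :: (a -> b) -> [a] -> [b]
mapPlain _ []     = []
mapPlain f (x:xs) = f x : mapPlain f xs

-- After SAT: bind `f` once and loop over a local worker. Inlining mapSAT
-- at a call site now specialises the whole loop for the concrete `f`,
-- which is the effect the on-the-fly Simplifier variant is after.
mapSAT :: (a -> b) -> [a] -> [b]
mapSAT f = go
  where
    go []     = []
    go (x:xs) = f x : go xs

main :: IO ()
main = print (mapSAT (+1) (mapPlain (*2) [1, 2, 3 :: Int]))
```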

Re: Beefing up rewrite rules: I *think* the RULE you suggest amounts to
implementing pattern unification in RULEs (perhaps you could prevent that
by saying that `x` may not occur freely in `next` or `f`, but the paper
explicitly *wants* `x` to occur in `next`). I'd find that cool, but I'm a
bit wary that the RULE matcher (which I'm not very familiar with) might
behave subtly differently in certain key scenarios than vanilla pattern
unification, and we might get breaking changes as a result.
At the moment, RULE matching only ever matches a term against a pattern,
where the former has no "unification variables", so it might be simpler
than full-blown pattern unification.

So in short, the problem was never that we couldn't write down the RULE,
but that it's hard to implement in GHC.
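For reference, a self-contained sketch of writing down that rule. All names here (`Stream`, `Step`, `concatMapS`, `concatMapS'`) are illustrative stand-ins, not definitions from base or the thesis, and GHC's first-order matcher will likely warn that the rule may never fire, which is precisely the problem under discussion:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Minimal stream type in the style of Coutts et al. (illustrative).
data Step s a = Done | Yield a s

data Stream a = forall s. Stream (s -> Step s a) s

-- Naive concatMap over streams, producing a list for simplicity.
concatMapS :: (a -> Stream b) -> Stream a -> [b]
concatMapS f (Stream next s0) = go s0
  where
    go s = case next s of
      Done       -> []
      Yield a s' -> inner (f a) ++ go s'
    inner (Stream nx t0) = walk t0
      where
        walk t = case nx t of
          Done       -> []
          Yield b t' -> b : walk t'

-- The specialised variant: stepper and seed function are passed
-- separately, so both are statically known at the call site.
concatMapS' :: (t -> Step t b) -> (a -> t) -> Stream a -> [b]
concatMapS' nx f = concatMapS (\x -> Stream nx (f x))

-- The rule from the thesis. Matching 'nx' and 'f' under the binder
-- for 'x' needs higher-order pattern unification, which GHC's
-- first-order RULE matcher does not perform.
{-# RULES
"concatMapS/Stream" forall nx f.
    concatMapS (\x -> Stream nx (f x)) = concatMapS' nx f
  #-}

main :: IO ()
main = do
  let outer  = Stream (\i -> if i > 3 then Done else Yield i (i + 1)) (1 :: Int)
      down j = if j <= 0 then Done else Yield j (j - 1)
  print (concatMapS' down id outer)  -- [1,2,1,3,2,1]
```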

I can't really answer (1) or (2), but perhaps my summary above is useful to
you.

Sebastian

Am Mo., 14. Nov. 2022 um 10:47 Uhr schrieb J. Reinders <
jaro.reind...@gmail.com>:

> Dear GHC devs,
>
> I’m interested in stream fusion and would like to see what it takes to fix
> the remaining issues, so that it can replace foldr/build fusion in base.
>
> First of all I would like to know what exactly the challenges are that are
> left. I believe one of the main remaining problems is the fusion of
> ‘concatMap’. Is that really the only thing?
>
> Secondly, I would like to know what has already been tried. I know
> Sebastian Graf has spent a lot of effort trying to get SpecConstr to work
> on lambda arguments without success. I’ve read that Sebastian now considers
> the static argument transformation more promising.
>
> However, Duncan Coutts proposed in his thesis to make rewrite rules
> slightly more powerful and use the rewrite rule:
>
> concatMap (λx → Stream next (f x)) = concatMap' next f
>
> Has that ever been tried? If so, what is the problem with this rewrite
> rule approach? I can understand that the `f x` function application is
> usually in a more reduced form, but it seems relatively easy to make the
> rewrite rule matcher smart enough to see through beta-reductions like that.
>
> So my main questions are:
>
> 1. Is the ‘concatMap’ problem really the only problem left on the way to
> using stream fusion instead of foldr/build fusion in base?
>
> 2. Has the rewrite rule approach to solving the ‘concatMap’ problem ever
> been tried?
>
> Any other information about the current status of stream fusion is also
> much appreciated.
>
> Cheers,
>
> Jaro Reinders
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>


Re: Pushing to nofib

2022-09-27 Thread Sebastian Graf

Hi Simon,

Similar to the policy for the main GHC repo, you have to push to a wip/ 
branch (or your personal fork) and then open an MR against the NoFib 
repo.
Fortunately, the quality requirements for NoFib aren't high and it's 
likely your MR can be merged instantly.


Cheers,
Sebastian

-- Originalnachricht --
Von: "Simon Peyton Jones" 
An: "GHC developers" 
Gesendet: 27.09.2022 09:49:53
Betreff: Pushing to nofib


Friends

How do I push to the nofib/ repository?  I just wanted to add some 
documentation. I'm on branch 'master' and tried to push.  Got this 
below.  What next?


Thanks

Simon

bash$ git push
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 24 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 543 bytes | 543.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: GitLab: You are not allowed to push code to protected branches 
on this project.

To gitlab.haskell.org:ghc/nofib.git
 ! [remote rejected] master -> master (pre-receive hook declined)
error: failed to push some refs to 
'g...@gitlab.haskell.org:ghc/nofib.git'

simonpj@LHR-WD-22561:~/code/HEAD-1/nofib$


Re: ambiguous record field (but not *that* kind of ambiguous record field)

2022-05-16 Thread Sebastian Graf

Hi Richard,

I'm not sure if I'm missing something, but my adolescent naivety in 
frontend matters would try to reach for
https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0155-type-lambda.rst#motivation 
and write


  MkRec { field = \@a -> ... }

and I hope that will do the right thing. Indeed, I interpret your 
proposed `field @a = ...` as much the same.


Sebastian


-- Originalnachricht --
Von: "Richard Eisenberg" 
An: "Erdi, Gergo via ghc-devs" 
Gesendet: 16.05.2022 21:09:33
Betreff: ambiguous record field (but not *that* kind of ambiguous record 
field)



Hi all,

On a project I'm working on, I wish to declare something like

data Rec = MkRec { field :: forall a. SomeConstraint a => ... }

where the ... contains no mention of `a`.

Even with https://github.com/ghc-proposals/ghc-proposals/pull/448, I 
think there is no way to avoid the ambiguity when setting `field`. Is 
that correct? If so, what shall we do about it? The natural answer is 
somehow to write ... MkRec { field @a = ... } ... but that would break 
significant new syntactic ground. (Maybe it's good new syntactic 
ground, but it would still be very new.)


Thanks,
Richard


Markup language/convention for Notes?

2022-04-13 Thread Sebastian Graf

Hi Devs,

When writing Notes, I find myself using markdown-inspired or 
haddock-inspired features. The reason is that I keep telling myself


> In 5 years time, we'll surely have an automated tool that renders 
Notes referenced under the cursor in a popup in our IDE


And I might not be completely wrong about that, after all the strong 
conventions about Note declaration syntax allow me to do 
jump-to-definition on Note links in my IDE already (thanks to a shell 
script written by Zubin!).
Still, over the years I kept drifting between markdown and haddock 
syntax, sometimes used `backticked inline code` or haddock 'ticks' to 
refer to functions in the compiler (sometimes even 
'GHC.Fully.Qualified.ticks') and for code blocks I used all of the 
following forms:


Haddock "code quote"

> id :: a -> a
> id x = x

Markdown triple backticks

```hs
id :: a -> a
id x = x
```

Indentation by spaces

  id :: a -> a
  id x = x

And so on.

I know that at least Simon was thrown off in the past by my use of 
"tool-aware markup", perhaps also because I kept switching the targeted 
tool. I don't like that either. So I wonder:
Do you think it is worth optimising Notes for post-processing by an 
external tool? I think it's only reasonable if we decide on a target 
syntax. Which syntax should it be?

Cheers,
Sebastian


Re: Release Updates - 9.4.1 and 9.2.3

2022-04-06 Thread Sebastian Graf
Hi Matthew,

Depending on whether https://gitlab.haskell.org/ghc/ghc/-/issues/21229 is
deemed a blocker for 9.4 (I'd say it is, but YMMV), we should include
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/7788 in the list.
Perhaps we should make it dependent on whether !7788 is ready to merge by
May or whether it ends up as the only patch that holds back the release.

Sebastian

Am Mi., 6. Apr. 2022 um 11:00 Uhr schrieb Matthew Pickering <
matthewtpicker...@gmail.com>:

> Hi all,
>
> We have now forked the 9.4 branch.
>
> There are a few outstanding patches which have not yet been finished
> but which are essential to the release.
>
> * (#21019) Windows Toolchain Updates - Ben
> * (#20405) Partial Register Stall - Ben/Andreas
> * (!7812) Syntactic Unification - Sam
>
> The target date for the first alpha for 9.4.1 is 1st May.
>
> We have started preparing the 9.2.3 release in order to fix issues
> discovered in the 9.2.2 release. I will post separately to the list
> shortly when the schedule for this release is confirmed. The release
> manager for this release is Zubin.
>
> Cheers,
>
> Matt
>


Re[2]: Avoiding `OtherCon []` unfoldings, restoring definitions from unfoldings

2022-04-05 Thread Sebastian Graf
Top-level data structures tend to get OtherCon [] unfoldings when they 
are marked NOINLINE.


KindRep bindings are one particular example, and they appear quite 
often, too.


Why are KindReps NOINLINE? Because (from Note [Grand plan for 
Typeable])


  The KindReps can unfortunately get quite large. Moreover, the 
simplifier will
  float out various pieces of them, resulting in numerous top-level 
bindings.
  Consequently we mark the KindRep bindings as noinline, ensuring that 
the
  float-outs don't make it into the interface file. This is important 
since

  there is generally little benefit to inlining KindReps and they would
  otherwise strongly affect compiler performance.

But perhaps it's not top-level *data structures* without unfoldings that 
Gergő worries about.


Sebastian

-- Originalnachricht --
Von: "Ben Gamari" 
An: "Simon Peyton Jones" ; "ÉRDI Gergő" 


Cc: "GHC Devs" ; clash-langu...@googlegroups.com
Gesendet: 05.04.2022 15:53:02
Betreff: Re: Avoiding `OtherCon []` unfoldings, restoring definitions 
from unfoldings



Simon Peyton Jones  writes:


 I don't think any top-level Ids should have OtherCon [] unfoldings?  If
 they do, can you give a repro case?  OtherCon [] unfoldings usually mean "I
 know this variable is evaluated, but I don't know what its value is.  E.g
data T = MkT !a !a
   f (MkT x y) = ...

 here x and y have OtherCon [] unfoldings. They are definitely not bottom!


Is there a reason why we wouldn't potentially give a static data
constructor application an OtherCon [] unfolding? I would guess that
usually these are small enough to have a CoreUnfolding, but in cases
where the expression is too large to have an unstable unfolding we might
rather want to give it an OtherCon [].

Cheers,

- Ben


Re[2]: How to exploit ./hadrian/ghci to find errors quickly?

2022-01-29 Thread Sebastian Graf

Great! Glad I could help.

FWIW, if I have strange HLS bugs, I mostly restart it (if it had worked 
before) or delete .hie-bios, where HLS stores its build results.
HLS builds the same stuff as what hadrian/ghci needs to build. The 
former puts it in .hie-bios, the latter in ... .hadrian-ghci? Not sure. 
Anyway, if HLS behaves strangely, try deleting its build root.


Sebastian


-- Originalnachricht --
Von: "Norman Ramsey" 
An: "Sam Derbyshire" ; ghc-devs@haskell.org
Gesendet: 29.01.2022 00:34:23
Betreff: Re: How to exploit ./hadrian/ghci to find errors quickly?


 > The Binary runGet issue usually means that your build tree is out of date.
 > It's probably worth deleting and building from scratch again.

Brilliant!  Definitely working much better!

And @Sebastian I start to see the productivity gains.  Remove all
redundant imports with one click!


Norman


Re: How to exploit ./hadrian/ghci to find errors quickly?

2022-01-28 Thread Sebastian Graf

This is the typical use case for a language server.
I have haskell-language-server installed and use it extensively on GHC 
for stuff like jump to definition and immediate compilation feedback.

There's also "jump to next error" if you want that.

Installation was pretty trivial for me, just flipping this switch in my 
local ghc.nix clone: 
https://github.com/alpmestan/ghc.nix/blob/88acad6229300d8917ad226a64de80aa491ffa07/default.nix#L19

You will probably have to install an LSP plugin for emacs first.

The hour or so I invested on initial setup has probably saved me several 
days already.


Sebastian

-- Originalnachricht --
Von: "Norman Ramsey" 
An: ghc-devs@haskell.org
Gesendet: 28.01.2022 17:52:00
Betreff: How to exploit ./hadrian/ghci to find errors quickly?


 > My recommendation: ./hadrian/ghci.

Richard, this suggestion has been so useful that I would like to
follow it up.

I'm about to change a type definition.  This change may wreak havoc in
many parts of GHC, and this is exactly what I want: I'm looking for
type-error messages that will tell me what code I need to fix.
I do this work in emacs using `M-x compile` and `next-error`.  The key
property is that `next-error` requires just a couple of keystrokes
(C-x `) and it makes Emacs take me to the relevant source-code
location quickly.

But telling `M-x compile` to run `./hadrian/build` is quite slow.  Is
there a way to leverage `./hadrian/ghci` for this workflow?


Norman


Re[4]: Strictness/demand info for a Name

2022-01-13 Thread Sebastian Graf
Yes, every imported identifier from a module that was compiled with 
optimisations should have the proper analysis information in its IdInfo.
So if you have `base` somewhere in one of your compiled libraries and 
load it, then the identifier of `GHC.OldList.map` will have an 
`idDmdSig` that says it's strict in the list parameter.


At least that's how it works in GHC, where these IdInfos are populated 
with the information from the loaded interface files. The situation with 
HLS might be different, although I wouldn't expect that.


-- Originalnachricht --
Von: "Alejandro Serrano Mena" 
An: "Sebastian Graf" 
Cc: "GHC developers" ; "Matthew Pickering" 


Gesendet: 13.01.2022 16:43:50
Betreff: Re: Re[2]: Strictness/demand info for a Name


Thanks for the pointers! :)

Knowing this, let me maybe rephrase my question: is it possible to get 
demand information of identifiers without running the analysis itself?


Alejandro

El 13 ene 2022 15:45:29, Sebastian Graf  escribió:

Yes, Matt is right.

`dmdSigInfo` describes how a function Id uses its arguments and free
variables, whereas
`demandInfo` describes how a (local, mostly) Id is used.

Note that if you wanted to go beyond type-checking, you could probably
run the analysis on the desugaring of the current module quite easily.
But the results would be misleading, as prior optimisations (that you
probably don't want to run) may arrange the program in a way that demand
analysis has an easier time.

-- Originalnachricht --
Von: "Matthew Pickering" 
An: "Alejandro Serrano Mena" 
Cc: "GHC developers" 
Gesendet: 13.01.2022 15:38:29
Betreff: Re: Strictness/demand info for a Name


You look at `dmdSigInfo` in `IdInfo`.

Matt

On Thu, Jan 13, 2022 at 2:20 PM Alejandro Serrano Mena
 wrote:
>
>  Dear all,
>
>  I’m trying to bring the information about demand and strictness to 
the Haskell Language Server, but I cannot find a way to do so. I was 
wondering whether you could help me :)

>
>  Here’s my understanding; please correct me if I’m wrong:
>
>  The analysis runs on Core, so getting this information for the 
current file would require to run the compiler further than type 
checking, which is quite expensive,
>  However, this analysis should somehow use known information about 
imported functions, which should be readily available somewhere,
>  If the above is true, what is the simplest way to get the 
information for imported things? As I mentioned above, I would prefer 
not to run the compiler further than the type checking phase, since 
otherwise it gets too expensive for IDE usage. Right now HLS uses the 
information from the .hie files.

>
>
>  In fact, this goes into the more general question of how to show 
information from different analyses within the IDE; I guess solving 
the case for strictness/analysis may open the door to more (maybe 
everything recorded inside a `Id`?)

>
>  Regards,
>  Alejandro


Re[2]: Strictness/demand info for a Name

2022-01-13 Thread Sebastian Graf

Yes, Matt is right.

`dmdSigInfo` describes how a function Id uses its arguments and free 
variables, whereas
`demandInfo` describes how a (local, mostly) Id is used.

Note that if you wanted to go beyond type-checking, you could probably 
run the analysis on the desugaring of the current module quite easily.
But the results would be misleading, as prior optimisations (that you 
probably don't want to run) may arrange the program in a way that demand 
analysis has an easier time.


-- Originalnachricht --
Von: "Matthew Pickering" 
An: "Alejandro Serrano Mena" 
Cc: "GHC developers" 
Gesendet: 13.01.2022 15:38:29
Betreff: Re: Strictness/demand info for a Name


You look at `dmdSigInfo` in `IdInfo`.

Matt

On Thu, Jan 13, 2022 at 2:20 PM Alejandro Serrano Mena
 wrote:


 Dear all,

 I’m trying to bring the information about demand and strictness to the Haskell 
Language Server, but I cannot find a way to do so. I was wondering whether you 
could help me :)

 Here’s my understanding; please correct me if I’m wrong:

 The analysis runs on Core, so getting this information for the current file 
would require to run the compiler further than type checking, which is quite 
expensive,
 However, this analysis should somehow use known information about imported 
functions, which should be readily available somewhere,
 If the above is true, what is the simplest way to get the information for 
imported things? As I mentioned above, I would prefer not to run the compiler 
further than the type checking phase, since otherwise it gets too expensive for 
IDE usage. Right now HLS uses the information from the .hie files.


 In fact, this goes into the more general question of how to show information 
from different analyses within the IDE; I guess solving the case for 
strictness/analysis may open the door to more (maybe everything recorded inside 
a `Id`?)

 Regards,
 Alejandro


Re[4]: Transparently hooking into the STG stack to validate an escape analysis

2021-12-17 Thread Sebastian Graf

(Moving the conversation off the list.)

-- Originalnachricht --
Von: "Csaba Hruska" 
An: "Sebastian Graf" 
Cc: "ghc-devs" ; "Sebastian Scheper" 


Gesendet: 16.12.2021 18:22:30
Betreff: Re: Re[2]: Transparently hooking into the STG stack to validate 
an escape analysis



Thanks for the feedback!
Let's have a video meeting. My schedule is flexible. What time is best 
for you?


Cheers,
Csaba

On Thu, Dec 16, 2021 at 5:29 PM Sebastian Graf  
wrote:

Hey Csaba,

After catching up on your talk and reflecting about it a bit, it seems 
quite obvious that your tool is the right way to collect the data and 
validate our analysis!
Even if meanwhile we decided that a "transparent stack frame" (which I 
believe is something similar to what you are doing here 
<https://github.com/grin-compiler/ghc-whole-program-compiler-project/blob/d3ed07d9f3167ad1afa01d1aa95aec2472b2708f/external-stg-interpreter/lib/Stg/Interpreter.hs#L130>, 
with an explicit `argCount` which we do not know) is not the ideal 
solution we've been looking for (for different reasons).


Essentially, we have two things:

1. An escape analysis, implemented as an STG-to-STG pass that attaches to
   each Id a boolean flag saying whether it "escapes its scope" (for a
   suitable definition of that). We'd like to implement it in a way that
   would be reusable within GHC with moderate effort (e.g., renaming
   Binder to Id or accounting for different fields), operating on a
   module at a time rather than the whole program.
2. The instrumentation that tries to measure how many heap objects could
   be allocated on the stack. E.g., set a closure-specific flag whenever
   the closure is entered, unset that bit (once) when we "leave the
   scope" that defines the closure.
If my understanding is right, we could just implement this 
"instrumentation" as a simple extra field to Closure 
<https://github.com/grin-compiler/ghc-whole-program-compiler-project/blob/d3ed07d9f3167ad1afa01d1aa95aec2472b2708f/external-stg-interpreter/lib/Stg/Interpreter/Base.hs#L135>, 
right? Neat!
A bit tangential: I see that your interpreter currently allocates a 
fresh closure for let-no-escapes 
<https://github.com/grin-compiler/ghc-whole-program-compiler-project/blob/d3ed07d9f3167ad1afa01d1aa95aec2472b2708f/external-stg-interpreter/lib/Stg/Interpreter.hs#L606> 
when it could just re-use the closure of its defining scope. That 
would skew our numbers somewhat compared to instrumenting GHC-compiled 
programs, but I think we'd be able to work around that. I also wonder 
if the semantics of your let-no-escapes are actually as typically 
specified (operationally), in that a jump to a let-no-escape should 
also reset the stack pointer. It should hardly matter for the programs 
that GHC generates, though.


I would also be interested in knowing whether the +RTS -s "bytes 
allocated in the heap" metric (very nearly) coincides with a similar 
metric you could produce. It would be fantastic if that was the case! 
Theoretically, that should be possible, right?


I think your interpreter work is very valuable to collect data we 
otherwise would only be able to measure with a TickyTicky-based 
approach. Nice!
Another, similar use case would be to identify the fraction of 
closures that are only entered once. I remember that there was a 
ticky-based patch with which Joachim used to measure this fraction 
<https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/compiler/demand#instrumentation> 
(and similarly validate the analysis results), but unfortunately it 
couldn't end up in master. Ah, yes, we have a whole open ticket about 
it: #10613 <https://gitlab.haskell.org/ghc/ghc/-/issues/10613>. In 
fact, that instrumentation is also somewhat similar (adding a field to 
every closure) as what we want to do.


Anyway, it seems like your work will be very valuable in replacing 
some of the annoying ticky-based instrumentation ideas!


Maybe we can have a call some time this or next week to discuss 
details, once Sebastian and I are more familiar with the code base?


Thanks for sticking with the project and doing all the hard work that 
can build upon!

Sebastian

-- Originalnachricht --
Von: "Csaba Hruska" 
An: "Sebastian Graf" 
Cc: "ghc-devs" ; "Sebastian Scheper" 


Gesendet: 15.12.2021 16:16:27
Betreff: Re: Transparently hooking into the STG stack to validate an 
escape analysis



Hi,

IMO the Cmm STG machine implementation is just too complex for 
student projects. It's not fun to work with at all.

Why did you choose this approach?
IMO the escape analysis development and validation would be much 
smoother and fun when you'd use the external STG interpreter.
When you have a solid and working design of your analysis and 
transformations then you could implement it in GHC's native backend 
if it needs any changes at all.


What do you think?
Do you disagree?

Re[2]: Transparently hooking into the STG stack to validate an escape analysis

2021-12-16 Thread Sebastian Graf

Hey Csaba,

After catching up on your talk and reflecting about it a bit, it seems 
quite obvious that your tool is the right way to collect the data and 
validate our analysis!
Even if meanwhile we decided that a "transparent stack frame" (which I 
believe is something similar to what you are doing here 
<https://github.com/grin-compiler/ghc-whole-program-compiler-project/blob/d3ed07d9f3167ad1afa01d1aa95aec2472b2708f/external-stg-interpreter/lib/Stg/Interpreter.hs#L130>, 
with an explicit `argCount` which we do not know) is not the ideal 
solution we've been looking for (for different reasons).


Essentially, we have two things:

1. An escape analysis, implemented as an STG-to-STG pass that attaches to
   each Id a boolean flag saying whether it "escapes its scope" (for a
   suitable definition of that). We'd like to implement it in a way that
   would be reusable within GHC with moderate effort (e.g., renaming
   Binder to Id or accounting for different fields), operating on a
   module at a time rather than the whole program.
2. The instrumentation that tries to measure how many heap objects could
   be allocated on the stack. E.g., set a closure-specific flag whenever
   the closure is entered, unset that bit (once) when we "leave the
   scope" that defines the closure.
If my understanding is right, we could just implement this 
"instrumentation" as a simple extra field to Closure 
<https://github.com/grin-compiler/ghc-whole-program-compiler-project/blob/d3ed07d9f3167ad1afa01d1aa95aec2472b2708f/external-stg-interpreter/lib/Stg/Interpreter/Base.hs#L135>, 
right? Neat!
A bit tangential: I see that your interpreter currently allocates a 
fresh closure for let-no-escapes 
<https://github.com/grin-compiler/ghc-whole-program-compiler-project/blob/d3ed07d9f3167ad1afa01d1aa95aec2472b2708f/external-stg-interpreter/lib/Stg/Interpreter.hs#L606> 
when it could just re-use the closure of its defining scope. That would 
skew our numbers somewhat compared to instrumenting GHC-compiled 
programs, but I think we'd be able to work around that. I also wonder if 
the semantics of your let-no-escapes are actually as typically specified 
(operationally), in that a jump to a let-no-escape should also reset the 
stack pointer. It should hardly matter for the programs that GHC 
generates, though.


I would also be interested in knowing whether the +RTS -s "bytes 
allocated in the heap" metric (very nearly) coincides with a similar 
metric you could produce. It would be fantastic if that was the case! 
Theoretically, that should be possible, right?


I think your interpreter work is very valuable to collect data we 
otherwise would only be able to measure with a TickyTicky-based 
approach. Nice!
Another, similar use case would be to identify the fraction of closures 
that are only entered once. I remember that there was a ticky-based 
patch with which Joachim used to measure this fraction 
<https://gitlab.haskell.org/ghc/ghc/-/wikis/commentary/compiler/demand#instrumentation> 
(and similarly validate the analysis results), but unfortunately it 
couldn't end up in master. Ah, yes, we have a whole open ticket about 
it: #10613 <https://gitlab.haskell.org/ghc/ghc/-/issues/10613>. In fact, 
that instrumentation is also somewhat similar (adding a field to every 
closure) as what we want to do.


Anyway, it seems like your work will be very valuable in replacing some 
of the annoying ticky-based instrumentation ideas!


Maybe we can have a call some time this or next week to discuss details, 
once Sebastian and I are more familiar with the code base?


Thanks for sticking with the project and doing all the hard work that 
can build upon!

Sebastian

-- Originalnachricht --
Von: "Csaba Hruska" 
An: "Sebastian Graf" 
Cc: "ghc-devs" ; "Sebastian Scheper" 


Gesendet: 15.12.2021 16:16:27
Betreff: Re: Transparently hooking into the STG stack to validate an 
escape analysis



Hi,

IMO the Cmm STG machine implementation is just too complex for student 
projects. It's not fun to work with at all.

Why did you choose this approach?
IMO the escape analysis development and validation would be much 
smoother and fun when you'd use the external STG interpreter.
When you have a solid and working design of your analysis and 
transformations then you could implement it in GHC's native backend if 
it needs any changes at all.


What do you think?
Do you disagree?

Have you seen my presentation about the stg interpreter?
https://www.youtube.com/watch?v=Ey5OFPkxF_w

Cheers,
Csaba

On Wed, Dec 8, 2021 at 11:20 AM Sebastian Graf  
wrote:

Hi Devs,

my master's student Sebastian and I (also Sebastian :)) are working on 
an escape analysis in STG, see 
https://gitlab.haskell.org/ghc/ghc/-/issues/16891#note_347903.


We have a prototype for the escape analysis that we want to 
validate/exploit now.
The original plan was to write the transformation that allocates 
non-escaping objects on the STG stack.

Re: Alternatives for representing a reverse postorder numbering

2021-12-09 Thread Sebastian Graf
FWIW, performance of IntMap could be even better if we had mutable fields and a 
transient (one with freeze/thaw conversion) interface.

We'd need a GHC with 
https://github.com/ghc-proposals/ghc-proposals/pull/8/files for that, though...

I think we could also speed up substitution by using a transient substitution 
type.

Von: ghc-devs  im Auftrag von Norman Ramsey 

Gesendet: Donnerstag, Dezember 9, 2021 8:16 PM
An: Andreas Klebinger
Cc: ghc-devs@haskell.org
Betreff: Re: Alternatives for representing a reverse postorder numbering

> Which I guess mostly depends on how much mileage we get out of the
 > numbering... I rarely have lost sleep over the overhead of looking
 > things up in IntMaps.

Thank you!!  I found your analysis very helpful.  I will stick with
the IntMaps until and unless things reach a stage where they look
really ugly.

 > There is no invariant that Cmm control flow is reducible. So we
 > can't always rely on this being the case.

Good to know.  I would still like to have a simple Haskell example
that generates an irreducible control-flow graph, but for now I can
just write them by hand using .cmm files.

BTW *every* control-flow graph has at least one reverse-postorder
numbering, whether it is reducible or not.


Norman



Transparently hooking into the STG stack to validate an escape analysis

2021-12-08 Thread Sebastian Graf

Hi Devs,

my master's student Sebastian and I (also Sebastian :)) are working on 
an escape analysis in STG, see 
https://gitlab.haskell.org/ghc/ghc/-/issues/16891#note_347903.


We have a prototype for the escape analysis that we want to 
validate/exploit now.
The original plan was to write the transformation that allocates 
non-escaping objects on the STG stack. But that is quite tricky for many 
reasons, one of them being treatment of the GC.


This mail is rather lengthy, so I suggest you skip to "What we hope 
could work" and see if you can answer it without the context I provide 
below. If you can't, I would be very grateful if you were willing to 
suffer through the exposition.


# Instrumentation

So instead we thought about doing an (easily changed and thus versatile) 
instrumentation-based approach:
Assign a sequence number to every instantiation (a term I will use to 
mean "allocation of g's closure") of things that our analysis 
determines as escaping or non-escaping, such as STG's let bindings 
(focusing on non-let-no-escape functions for now).
(One sequence number *per allocation* of the let binding's closure, not 
based on its syntactic identity.)
Then, we set a bit in a (dynamically growing) global bit vector whenever 
the let "RHS is entered" and then unset it when we "leave the let-body". 
Example:


f = \x y ->
  let {
g = [y] \z -> y + z;
  } in g x

Here, our analysis could see that no instantiation (which I say instead 
of "allocation of g's closure") of g will ever escape its scope within 
f.
Our validation would give a fresh sequence number to the instantiation 
of g whenever f is called and store it in g's closure (which we arrange 
by piggy-backing on -prof and adding an additional field to the 
profiling header).
Then, when g's RHS is entered, we set the bit in the global bit vector, 
indicating "this instantiation of g might escape".
After leaving the RHS of g, we also leave the body of the defining let, 
which means we unset the bit in the bit vector, meaning "every use so 
far wasn't in an escaping scenario".


So far so good. Modifying the code upon entering g takes a bit of 
tinkering but can be done by building on TickyTicky in StgToCmm.
But what is not done so easily is inserting the continuation upon 
entering the let that will unset the bit!


# What doesn't work: Modifying the Sequel

At first, we tried to modify the sequel of the let-body to an `AssignTo`.
That requires us to know the registers in which the let-body will return 
its results, which in turn means we have to know the representation of 
those results, so we have to write a function `stgExprPrimRep :: 
GenStgExpr p -> [PrimRep]`.
Urgh! We were very surprised that there was no such function. And while 
we tested our function, we very soon knew why. Consider the following 
pattern synonym matcher:


GHC.Natural.$mNatJ#
  :: forall {rep :: GHC.Types.RuntimeRep} {r :: TYPE rep}.
 GHC.Num.Natural.Natural
 -> (GHC.Num.BigNat.BigNat -> r) -> ((# #) -> r) -> r
 = {} \r [scrut_sBE cont_sBF fail_sBG]
case scrut_sBE of {
  GHC.Num.Natural.NS _ -> fail_sBG GHC.Prim.(##);
  GHC.Num.Natural.NB ds_sBJ ->
  let {
sat_sBK :: GHC.Num.BigNat.BigNat
 = CCCS GHC.Num.BigNat.BN#! [ds_sBJ];
  } in  cont_sBF sat_sBK;
};

Note how its result is representation-polymorphic! It only works because 
our machine implementation allows tail-calls.
It's obvious in hindsight that we could never write `stgExprPrimRep` in 
such a way that it will work on the expression `cont_sBF sat_sBK`.

So the sequel approach seems infeasible.
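For reference, the troublesome shape is expressible in source Haskell, 
too — the following self-contained toy (ours, not GHC-internal code) has 
a representation-polymorphic result that only type-checks because both 
branches are tail calls, just like `cont_sBF sat_sBK` above:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, PolyKinds, RankNTypes #-}
import GHC.Exts (RuntimeRep, TYPE)

-- The result type r lives in an *unknown* representation, so there is
-- no caller-independent [PrimRep] for the body of pick; the binders
-- themselves (b, k1, k2) all have fixed, lifted representation.
pick :: forall (rep :: RuntimeRep) (r :: TYPE rep).
        Bool -> (Int -> r) -> (Int -> r) -> r
pick b k1 k2 = if b then k1 1 else k2 2
```

A hypothetical `stgExprPrimRep` would have nothing sensible to return 
for `pick`'s body.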

# What we hope could work: A special stack frame

The other alternative would be to insert a special continuation frame on 
the stack when we enter the let-body (inspired by stg_restore_cccs).
This continuation frame would simply push all registers (FP regs, GP 
regs, Vec regs, ...) to the C stack, do its work (unsetting the bit), 
then pop all registers again and jump to the topmost continuation on the 
STG stack.

Example:

f :: forall rep (r :: TYPE rep). Int# -> (Int# -> r) -> r
f = \x g ->
  let {
h = [x] \a -> x + a;
  } in
  case h x of b {
__DEFAULT -> g b
  }

We are only interested in unsetting the bit for h here. Consider the 
stack when entering the body of h.


caller_of_f_cont_info <- Sp

Now push our special continuation frame:

caller_of_f_cont_info
seq_h
unset_bit_stk_info <- Sp

E.g., the stack frame contains the info pointer and the sequence number. 
(Btw., I hope I got the stack layout about right and that this is even 
possible.)
Then, after we entered the continuation of the __DEFAULT alt, we do a 
jump to g.
Plot twist: g returns an unboxed 8-tuple of `Int#`s (as 
caller_of_f_cont_info knows, but f certainly doesn't!), so before it 
returns it will push two args on 

Re: [EXTERNAL] can GHC generate an irreducible control-flow graph? If so, how?

2021-11-22 Thread Sebastian Graf
An alternative would be to mark both functions as NOINLINE, which the
Simplifier will adhere to.
You might also want to have `countA` and `countB` close over a local
variable in order for them not to be floated to the top-level.
If top-level bindings aren't an issue for you, you could simply use
mutually recursive even/odd definitions.

Otherwise, something like this might do:

foo :: Bool -> Int -> Bool
foo b n
  | n > 10    = even n
  | otherwise = odd n
  where
even 0 = b
even n = odd (n-1)
{-# NOINLINE even #-}
odd 0 = b
odd n = even (n-1)
{-# NOINLINE odd #-}

GHC 8.10 will simply duplicate both functions into each branch, but GHC
master produces irreducible control flow for me:

Lib.$wfoo
  = \ (b_sTr :: Bool) (ww_sTu :: GHC.Prim.Int#) ->
  joinrec {
$wodd_sTi [InlPrag=NOINLINE] :: GHC.Prim.Int# -> Bool
[LclId[JoinId(1)], Arity=1, Str=<1L>, Unf=OtherCon []]
$wodd_sTi (ww1_sTf :: GHC.Prim.Int#)
  = case ww1_sTf of wild_X1 {
  __DEFAULT -> jump $weven_sTp (GHC.Prim.-# wild_X1 1#);
  0# -> b_sTr
};
$weven_sTp [InlPrag=NOINLINE, Occ=LoopBreaker]
  :: GHC.Prim.Int# -> Bool
[LclId[JoinId(1)], Arity=1, Str=<1L>, Unf=OtherCon []]
$weven_sTp (ww1_sTm :: GHC.Prim.Int#)
  = case ww1_sTm of wild_X1 {
  __DEFAULT -> jump $wodd_sTi (GHC.Prim.-# wild_X1 1#);
  0# -> b_sTr
}; } in
  case GHC.Prim.># ww_sTu 10# of {
__DEFAULT -> jump $wodd_sTi ww_sTu;
1# -> jump $weven_sTp ww_sTu
  }
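For completeness, Simon's suggestion below — making each function also 
call itself, so neither can be picked as the sole loop-breaker and 
inlined away — might be sketched like this (a hypothetical variant of 
Norman's example, no NOINLINE needed):

```haskell
-- Each counter calls itself *and* the other, so neither function can be
-- chosen as a lone loop-breaker and fully inlined into its sibling.
length' :: Bool -> [a] -> Int
length' trigger = go
  where
    go xs = if trigger then countA 0 xs else countB 0 xs
    countA n []     = n
    countA n (_:as) = if even n then countA (n + 1) as else countB (n + 1) as
    countB n []     = n
    countB n (_:as) = if odd n  then countB (n + 2) as else countA (n + 2) as
```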

Cheers,
Sebastian


On Mon, 22 Nov 2021 at 21:37, Simon Peyton Jones via ghc-devs <
ghc-devs@haskell.org> wrote:

> GHC breaks strongly connected components with a so-called loop-breaker. In
> this case, maybe countA is the loop-breaker; then countB can inline at all
> its call sites, and it'll look very reducible.  See "Secrets of the GHC
> inliner".
>
> If you make countA and countB each call themselves, as well as the other,
> that will defeat this plan, and you may get closer to your goal.
>
> I'm guessing a bit, but hope this helps.
>
> Simon
>
> PS: I am leaving Microsoft at the end of November 2021, at which point
> simo...@microsoft.com will cease to work.  Use simon.peytonjo...@gmail.com
> instead.  (For now, it just forwards to simo...@microsoft.com.)
>
> | -Original Message-
> | From: ghc-devs  On Behalf Of Norman
> | Ramsey
> | Sent: 22 November 2021 19:52
> | To: ghc-devs@haskell.org
> | Subject: [EXTERNAL] can GHC generate an irreducible control-flow graph?
> | If so, how?
> |
> | I'm trying to figure out how to persuade GHC to generate an irreducible
> | control-flow graph (so I can test an algorithm to convert it to
> | structured control flow).
> |
> | The attached image shows (on the left) the classic simple irreducible
> | CFG: there is a loop between nodes A and B, but neither one dominates
> | the other, so there is no loop header.  I tried to get GHC to generate
> | this CFG using the following source code:
> |
> |   length'' :: Bool -> List a -> Int
> |   length'' trigger xs = if trigger then countA 0 xs else countB 0 xs
> | where countA n Nil = case n of m -> m
> |   countA n (Cons _ as) = case n + 1 of m -> countB m as
> |   countB n Nil = case n of m -> m
> |   countB n (Cons _ as) = case n + 2 of m -> countA m as
> |
> | Unfortunately (for my purposes), GHC generates a perfectly lovely
> | reducible flow graph with a single header node.
> |
> | It is even possible for GHC to generate an irreducible control-flow
> | graph?  If so, how can it be done?
> |
> |
> | Norman
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re[2]: Case split uncovered patterns in warnings or not?

2021-11-10 Thread Sebastian Graf
Yes, but that is an entirely different issue: See 
https://gitlab.haskell.org/ghc/ghc/-/issues/13964, 
https://gitlab.haskell.org/ghc/ghc/-/issues/20311 and my problems in 
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4116#note_301577 and 
following. Help is appreciated there, I don't know how to get the 
necessary information in `DsM`. Would need to poke at `mi_exports`, 
which is quite unreachable at that point. I'd probably have to add a 
field to the `DsGblEnv`.


I agree that Integer is another nail in the coffin, but only by 
coincidence. As I said in the issue, if you do an EmptyCase on `Integer` 
(which you rarely should do), then you'd be presented with the abstract 
constructors in GHC 8.8, too.


As for the issue at hand, I'll go for "case split on EmptyCase only", 
which should get back the behavior from 8.8.


-- Original message --
From: "Vladislav Zavialov" 
To: "Oleg Grenrus" 
Cc: "ghc-devs" 
Sent: 10.11.2021 10:51:03
Subject: Re: Case split uncovered patterns in warnings or not?


Integer is an interesting example. I think it reveals another issue: 
exhaustiveness checking should account for abstract data types. If the 
constructors are not exported, do not case split.

- Vlad


 On 10 Nov 2021, at 12:48, Oleg Grenrus  wrote:

 It should not. Not even when forced.

 I have seen an `Integer` constructors presented to me, for example:

 module Ex where

 foo :: Bool -> Integer -> Integer
 foo True i = i

 With GHC-8.8 the warning is good:

 % ghci-8.8.4 -Wall Ex.hs
 GHCi, version 8.8.4: https://www.haskell.org/ghc/  :? for help
 Loaded GHCi configuration from /home/phadej/.ghci
 [1 of 1] Compiling Ex   ( Ex.hs, interpreted )

 Ex.hs:4:1: warning: [-Wincomplete-patterns]
 Pattern match(es) are non-exhaustive
 In an equation for ‘foo’: Patterns not matched: False _
   |
 4 | foo True i = i
   | ^^^^^^^^^^^^^^

 With GHC-8.10 is straight up awful.
 I'm glad I don't have to explain it to any beginner,
 or person who don't know how Integer is implemented.
 (9.2 is about as bad too).

 % ghci-8.10.4 -Wall Ex.hs
 GHCi, version 8.10.4: https://www.haskell.org/ghc/  :? for help
 Loaded GHCi configuration from /home/phadej/.ghci
 [1 of 1] Compiling Ex   ( Ex.hs, interpreted )

 Ex.hs:4:1: warning: [-Wincomplete-patterns]
 Pattern match(es) are non-exhaustive
 In an equation for ‘foo’:
 Patterns not matched:
 False (integer-gmp-1.0.3.0:GHC.Integer.Type.S# _)
 False (integer-gmp-1.0.3.0:GHC.Integer.Type.Jp# _)
 False (integer-gmp-1.0.3.0:GHC.Integer.Type.Jn# _)
   |
 4 | foo True i = i
   | ^^^^^^^^^^^^^^

 - Oleg


 On 9.11.2021 15.17, Sebastian Graf wrote:

 Hi Devs,

 In https://gitlab.haskell.org/ghc/ghc/-/issues/20642 we saw that GHC >= 8.10 
outputs pattern match warnings a little differently than it used to. Example from 
there:

 {-# OPTIONS_GHC -Wincomplete-uni-patterns #-}

 foo :: [a] -> [a]
 foo [] = []
 foo xs = ys
   where
   (_, ys@(_:_)) = splitAt 0 xs

 main :: IO ()
 main = putStrLn $ foo $ "Hello, coverage checker!"
 Instead of saying



 ListPair.hs:7:3: warning: [-Wincomplete-uni-patterns]
 Pattern match(es) are non-exhaustive
 In a pattern binding: Patterns not matched: (_, [])



 We now say



 ListPair.hs:7:3: warning: [-Wincomplete-uni-patterns]
 Pattern match(es) are non-exhaustive
 In a pattern binding:
 Patterns of type ‘([a], [a])’ not matched:
 ([], [])
 ((_:_), [])



 E.g., newer versions do (one) case split on pattern variables that haven't 
even been scrutinised by the pattern match. That amounts to quantitatively more 
pattern suggestions and for each variable a list of constructors that could be 
matched on.
 The motivation for the change is outlined in 
https://gitlab.haskell.org/ghc/ghc/-/issues/20642#note_390110, but I could 
easily be swayed not to do the case split. Which arguably is less surprising, 
as Andreas Abel points out.

 Considering the other examples from my post, which would you prefer?

 Cheers,
 Sebastian


 ___
 ghc-devs mailing list

ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs



Re[2]: Case split uncovered patterns in warnings or not?

2021-11-09 Thread Sebastian Graf
I agree in principle, but then what about data types with strict fields? 
E.g.


data SMaybe a = SNothing | SJust !a

f :: SMaybe Bool -> ()
f SNothing = ()

Today, we'd suggest `SJust _`.
But the checker can't differentiate between evaluation done by a 
pattern-match of the user vs. something like a strict field that was 
unlifted to begin with.

So we'd suggest `SJust True` and `SJust False`.
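That `SJust ⊥` really is uninhabited can be checked directly; here is a 
small self-contained demo (the `isBottom` helper is ours, for 
illustration):

```haskell
import Control.Exception (SomeException, evaluate, try)

data SMaybe a = SNothing | SJust !a

f :: SMaybe Bool -> ()
f SNothing  = ()
f (SJust _) = ()

-- Check whether forcing a value throws, i.e. whether it is bottom.
isBottom :: a -> IO Bool
isBottom x = do
  r <- try (evaluate x)
  case r of
    Left e  -> let _ = (e :: SomeException) in pure True
    Right _ -> pure False
```

`isBottom (f (SJust undefined))` is True: matching `SJust _` looks lazy, 
but building the `SJust` already forced the strict field, so the checker 
may rule out `SJust ⊥` just as it would for a user-written bang.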

Similarly, we'd case split unlifted data types by default, but not 
lifted data types.


I think I can easily make the whole function 
(`GHC.HsToCore.Pmc.Solver.generateInhabitingPatterns`) dependent on 
whether it's called from an EmptyCase or not, to recover the behavior 
pre-8.10.
But actually I had hoped we can come up with something more general and 
less ad-hoc than the behavior of 8.8. Maybe there isn't and 8.8 already 
lived in the sweet spot.


-- Original message --
From: "Richard Eisenberg" 
To: "Sebastian Graf" 
Cc: "ghc-devs" 
Sent: 10.11.2021 04:44:50
Subject: Re: Case split uncovered patterns in warnings or not?

Maybe the answer should depend on whether the scrutinee has already 
been forced. The new output ("We now say", below) offers up patterns 
that will change the strictness behavior of the code. The old output 
did not.


Reading the link below, I see that, previously, there was an 
inconsistency with -XEmptyCase, which *did* unroll one level of 
constructor. But maybe that made sense because -XEmptyCase is strict 
(unlike normal case).


I'm just saying this because I think listing the constructors in the 
-XEmptyCase case is a good practice, but otherwise I think they're 
clutterful... and strictness is a perhaps more principled way of making 
this distinction.


Richard

On Nov 9, 2021, at 8:17 AM, Sebastian Graf  
wrote:


Hi Devs,

In https://gitlab.haskell.org/ghc/ghc/-/issues/20642 we saw that GHC 
>= 8.10 outputs pattern match warnings a little differently than it 
used to. Example from there:


{-# OPTIONS_GHC -Wincomplete-uni-patterns #-}

foo :: [a] -> [a]
foo [] = []
foo xs = ys
  where
    (_, ys@(_:_)) = splitAt 0 xs

main :: IO ()
main = putStrLn $ foo $ "Hello, coverage checker!"
Instead of saying

ListPair.hs:7:3: warning: [-Wincomplete-uni-patterns]
    Pattern match(es) are non-exhaustive
    In a pattern binding: Patterns not matched: (_, [])
We now say

ListPair.hs:7:3: warning: [-Wincomplete-uni-patterns]
    Pattern match(es) are non-exhaustive
    In a pattern binding:
      Patterns of type ‘([a], [a])’ not matched:
          ([], [])
          ((_:_), [])
E.g., newer versions do (one) case split on pattern variables that 
haven't even been scrutinised by the pattern match. That amounts to 
quantitatively more pattern suggestions and for each variable a list 
of constructors that could be matched on.
The motivation for the change is outlined in 
https://gitlab.haskell.org/ghc/ghc/-/issues/20642#note_390110, but I 
could easily be swayed not to do the case split. Which arguably is 
less surprising, as Andreas Abel points out.


Considering the other examples from my post, which would you prefer?

Cheers,
Sebastian
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Case split uncovered patterns in warnings or not?

2021-11-09 Thread Sebastian Graf
Hi Devs,

In https://gitlab.haskell.org/ghc/ghc/-/issues/20642 we saw that GHC >=
8.10 outputs pattern match warnings a little differently than it used to.
Example from there:

{-# OPTIONS_GHC -Wincomplete-uni-patterns #-}

foo :: [a] -> [a]
foo [] = []
foo xs = ys
  where
    (_, ys@(_:_)) = splitAt 0 xs

main :: IO ()
main = putStrLn $ foo $ "Hello, coverage checker!"

Instead of saying

ListPair.hs:7:3: warning: [-Wincomplete-uni-patterns]
    Pattern match(es) are non-exhaustive
    In a pattern binding: Patterns not matched: (_, [])

We now say

ListPair.hs:7:3: warning: [-Wincomplete-uni-patterns]
    Pattern match(es) are non-exhaustive
    In a pattern binding:
      Patterns of type ‘([a], [a])’ not matched:
          ([], [])
          ((_:_), [])

E.g., newer versions do (one) case split on pattern variables that haven't
even been scrutinised by the pattern match. That amounts to quantitatively
more pattern suggestions and for each variable a list of constructors that
could be matched on.
The motivation for the change is outlined in
https://gitlab.haskell.org/ghc/ghc/-/issues/20642#note_390110, but I could
easily be swayed not to do the case split. Which arguably is less
surprising, as Andreas Abel points out.

Considering the other examples from my post, which would you prefer?

Cheers,
Sebastian
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Optics?

2021-10-04 Thread Sebastian Graf
Hi Alan, hi Vlad,

Yes, one thing that is nice about van Laarhoven lenses is that you don't
actually need to depend on anything if all you want is export lenses in
your API.

We have also discussed using a small lens library in the past, in
https://gitlab.haskell.org/ghc/ghc/-/issues/18693.
The MVP would be to just depend on the Lens module defined in newer
versions of Cabal.

Sebastian

On Mon, 4 Oct 2021 at 00:35, Alan & Kim Zimmerman <
alan.z...@gmail.com> wrote:

> With a pointer from Vlad and some study of the lens tutorial, I made a
> proof of concept at [1].
> I am deliberately not using the existing lens library as I envisage this
> code ending up in GHC.
>
> Alan
>
> [1]
> https://github.com/alanz/ghc-exactprint/blob/f218e211c47943c216a2e25d7855f98a0355f6b8/src/Language/Haskell/GHC/ExactPrint/ExactPrint.hs#L689-L723
>
>
>
> On Sun, 3 Oct 2021 at 18:52, Vladislav Zavialov 
> wrote:
>
>> Hi Alan,
>>
>> Your pair of functions can be packaged up as a single function, so that
>>
>> getEpa :: a -> EpaLocation
>> setEpa :: a -> EpaLocation -> a
>>
>> becomes
>>
>> lensEpa :: forall f. Functor f => (EpaLocation -> f EpaLocation)
>> -> (a -> f a)
>>
>> And the get/set parts can be recovered by instantiating `f` to either
>> Identity or Const.
>>
>> The nice thing about lenses is that they compose, so that if you need
>> nested access, you could define several lenses, compose them together, and
>> then reach deep into a data structure. Then lenses might offer some
>> simplification. Otherwise, an ordinary getter/setter pair is just as good.
>>
>> - Vlad
>>
>> > On 3 Oct 2021, at 20:40, Alan & Kim Zimmerman 
>> wrote:
>> >
>> > Hi all
>> >
>> > I am working on a variant of the exact printer which updates the
>> annotation locations from the `EpaSpan` version to the `EpaDelta` version,
>> as the printing happens
>> >
>> > data EpaLocation = EpaSpan RealSrcSpan
>> >  | EpaDelta DeltaPos
>> >
>> > The function doing the work is this
>> >
>> > markAnnKw :: (Monad m, Monoid w)
>> >   => EpAnn a -> (a -> EpaLocation) -> (a -> EpaLocation -> a) ->
>> AnnKeywordId -> EP w m (EpAnn a)
>> >
>> > which gets an annotation, a function to pull a specific location out,
>> and one to update it.
>> >
>> > I do not know much about lenses, but have a feeling that I could
>> simplify things by using one.
>> >
>> > Can anyone give me any pointers?
>> >
>> > Alan
>> >
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Documenting GHC: blogs, wiki pages, Notes, Haddocks, etc

2021-09-14 Thread Sebastian Graf
What I don't like:

   - I have to give a section name to an otherwise unused $chunk
   - I can't refer to $chunk from other modules
   - Everywhere I reference $chunk in a haddock, it gets inlined instead of
   appearing with its title or inserting a link with the section name it is
   currently decoupled from.


On Tue, 14 Sept 2021 at 16:25, Hécate wrote:

> Hi,
>
> The named chunks can be positioned through the use of the export list
> syntax:
>
> module Foo
>   ( main
>   -- * Section name that will appear
>   --
>   -- $chunk
>   )
>
> This should produce a free section that is not linked to any exported item.
> I see you're already using them though, so maybe I am understanding
> something else?
> On 14/09/2021 at 16:00, Sebastian Graf wrote:
>
> Hi,
>
> I've been using Haddock's named chunks feature too when writing the docs
> for selective lambda lifting.
> This is the result:
> https://hackage.haskell.org/package/ghc-8.10.2/docs/StgLiftLams.html, and
> this is how the source code looks:
> https://hackage.haskell.org/package/ghc-8.10.2/docs/src/StgLiftLams.html
>
> I quite like it. As you can see, I enabled both the existing Notes
> workflow and Haddock to work with it. It takes a bit of annoying extra
> work, though. Ideally, Haddock would simply recognise the Note syntax
> directly or provide a similar alternative.
>
> And as far as linking is concerned: Sure, haddocks don't have a title to
> refer to. But you can always link to them by linking to the syntactic
> entity! For example, if I want to link to DiagnosticReason from Severity, I
> can simply do so by saying "Also see 'Severity'".
> I do admit this might not be enough info at the reference site to
> determine whether the haddock linked to is relevant to the particular goal
> I want to achieve. Also as Simon points out, there are Notes that don't
> have a clear "owner".
>
> Heck, even writing an unused binding `_Late_lambda_lifting_in_STG` and putting
> the haddocks there would work, I suppose. We could simply link to it with
> '_Late_lambda_lifting_in_STG' from other haddocks.
>
> My point is: If we managed to have something quite like named chunks, but
> with a title and one place it gets rendered and then linked to (I don't
> like that named chunks are inlined into every use site), we could probably
> agree on using that.
> Also I'd like to see the Notes rendered *regardless* of whether the thing
> it is attached to is exported. That would make Notes a lot more accessible.
>
> Sebastian
>
>
> On Tue, 14 Sept 2021 at 14:32, Hécate wrote:
>
>> > today’s Haddock doesn’t understand Notes.  But we could fix that if we
>> were minded to.
>>
>> I may have missed an episode or two here, but what prevents us from
>> writing Notes as Named Chunks¹, putting them where Haddock expects
>> documentation, and referring to them from the relevant spot in the code?
>> Viktor (in CC) has done a wonderful work at producing nice layouts for
>> Haddocks in base, and we could learn a couple of lessons from his MRs.
>>
>> ---
>>
>> Now, on the matter of improving Haddock to understand GHC's notes, I'd
>> like to remind everyone that Haddock is currently understaffed in terms of
>> feature development, and I would like to call to everyone with experience
>> dealing with its codebase to give a hand in refactoring, dusting off and
>> improving the code so that its maintainability is not jeopardised by people
>> simply going elsewhere.
>> Our bus factor (or as I like to call it, circus factor), is quite
>> terrifying considering the importance of the tool in our ecosystem.
>>
>>
>> ¹
>> https://haskell-haddock.readthedocs.io/en/latest/markup.html#named-chunks
>>
>> On 14/09/2021 at 13:56, Simon Peyton Jones via ghc-devs wrote:
>>
>> Alfredo writes (below for full thread)
>>
>>
>>
>> That is a deceptively simple question you ask there :-) I don't have a
>> strong view myself, but I can offer the perspective of somebody who was
>> been for a long time on the "other side of the trenches" (i.e. working
>> Haskell programmer, not necessarily working GHC programmer):
>>
>>
>>
>> * Blog post: yes, it's true that is a snapshot, and it's true that is not
>> under GHC's gitlab umbrella, so I wouldn't treat it as a reliable source of
>> documentation (for the reasons you also explain) but it's surely a good
>> testament that "at this point in time, for this particular GHC commit,
>> things were this way);
>>
>>
>>
>> * The wiki page: in the past, when I wanted to lear

Re: Documenting GHC: blogs, wiki pages, Notes, Haddocks, etc

2021-09-14 Thread Sebastian Graf
Hi,

I've been using Haddock's named chunks feature too when writing the docs
for selective lambda lifting.
This is the result:
https://hackage.haskell.org/package/ghc-8.10.2/docs/StgLiftLams.html, and
this is how the source code looks:
https://hackage.haskell.org/package/ghc-8.10.2/docs/src/StgLiftLams.html

I quite like it. As you can see, I enabled both the existing Notes workflow
and Haddock to work with it. It takes a bit of annoying extra work, though.
Ideally, Haddock would simply recognise the Note syntax directly or provide
a similar alternative.

And as far as linking is concerned: Sure, haddocks don't have a title to
refer to. But you can always link to them by linking to the syntactic
entity! For example, if I want to link to DiagnosticReason from Severity, I
can simply do so by saying "Also see 'Severity'".
I do admit this might not be enough info at the reference site to determine
whether the haddock linked to is relevant to the particular goal I want to
achieve. Also as Simon points out, there are Notes that don't have a clear
"owner".

Heck, even writing an unused binding `_Late_lambda_lifting_in_STG` and putting
the haddocks there would work, I suppose. We could simply link to it with
'_Late_lambda_lifting_in_STG' from other haddocks.

My point is: If we managed to have something quite like named chunks, but
with a title and one place it gets rendered and then linked to (I don't
like that named chunks are inlined into every use site), we could probably
agree on using that.
Also I'd like to see the Notes rendered *regardless* of whether the thing
it is attached to is exported. That would make Notes a lot more accessible.

Sebastian


On Tue, 14 Sept 2021 at 14:32, Hécate wrote:

> > today’s Haddock doesn’t understand Notes.  But we could fix that if we
> were minded to.
>
> I may have missed an episode or two here, but what prevents us from writing
> Notes as Named Chunks¹, putting them where Haddock expects
> documentation, and referring to them from the relevant spot in the code?
> Viktor (in CC) has done a wonderful work at producing nice layouts for
> Haddocks in base, and we could learn a couple of lessons from his MRs.
>
> ---
>
> Now, on the matter of improving Haddock to understand GHC's notes, I'd
> like to remind everyone that Haddock is currently understaffed in terms of
> feature development, and I would like to call to everyone with experience
> dealing with its codebase to give a hand in refactoring, dusting off and
> improving the code so that its maintainability is not jeopardised by people
> simply going elsewhere.
> Our bus factor (or as I like to call it, circus factor), is quite
> terrifying considering the importance of the tool in our ecosystem.
>
>
> ¹
> https://haskell-haddock.readthedocs.io/en/latest/markup.html#named-chunks
>
> On 14/09/2021 at 13:56, Simon Peyton Jones via ghc-devs wrote:
>
> Alfredo writes (below for full thread)
>
>
>
> That is a deceptively simple question you ask there :-) I don't have a
> strong view myself, but I can offer the perspective of somebody who was
> been for a long time on the "other side of the trenches" (i.e. working
> Haskell programmer, not necessarily working GHC programmer):
>
>
>
> * Blog post: yes, it's true that is a snapshot, and it's true that is not
> under GHC's gitlab umbrella, so I wouldn't treat it as a reliable source of
> documentation (for the reasons you also explain) but it's surely a good
> testament that "at this point in time, for this particular GHC commit,
> things were this way);
>
>
>
> * The wiki page: in the past, when I wanted to learn more about some GHC
> feature, Google would point me to the relevant Wiki page on the GHC repo
> describing such a feature, but I have to say I have almost always dismissed
> it, because everybody knows Wikis are seldomly up-to-date :) In order for a
> Wiki page to work we would have to at least add a banner at the top that
> states this can be trusted as a reliable source of information, and offer
> in the main section the current, up-to-date design. We can still offer the
> historical breakdown of the designs in later sections, as it's still
> valuable info to keep;
>
>
>
> * GHC notes: I have always considered GHC notes a double-edge sword --
> from one side they are immensely useful when navigating the source code,
> but these won't be rendered in the Hackage's haddocks, and this is not
> helpful for GHC-the-library users willing to understand how to use (or
> which is the semantic of) a particular type (sure, one can click "Show
> Source" on Hackage but it's an annoying extra step to do just to hunt for
> notes). We already have Notes for this work in strategic places -- even
> better, we have proper Haddock comments for things like "Severity vs
> DiagnosticReason" , e.g.
> https://gitlab.haskell.org/ghc/ghc/-/blob/master/compiler/GHC/Types/Error.hs#L279
> 

Re: Potential improvement in compacting GC

2021-09-02 Thread Sebastian Graf
Hey Ömer,

Just in case you are wondering whether you are talking into the void: you 
aren't! Keep the good suggestions coming, someone will eventually be able to 
get around to implementing them!
Thanks for your write-ups.

Cheers,
Sebastian

From: ghc-devs on behalf of Ömer Sinan Ağacan
Sent: Thursday, September 2, 2021 5:47:08 PM
To: ghc-devs
Subject: Re: Potential improvement in compacting GC

Here's another improvement that fixes a very old issue in GHC's compacting GC
[1].

To summarize the problem: when unthreading an object we update the references
to the object that we've seen so far to point to the object's new location.
object's new location we need to know the object's size, because depending on
the size we may need to move the object to a new block (when the current block
does not have enough space for the object).

For this we currently walk the thread twice, once to get the info table (used
to get the size), then again to update references to the object. Ideally we
want to do just one pass when unthreading.

The solution is explained in The GC Handbook, section 3.4. Instead of using one
bit to mark an object, we use two bits: one for the first word of the object,
one for the last word. Using two bits is not a problem in GHC because heap
objects are at least 2 words. For example, an object with two words is marked
with `11`, 3 words is marked with `101` and so on.

Now we can scan the bitmap to find object size, and unthread it without having
to find the info table first.

Ömer

[1]: 
https://github.com/ghc/ghc/blob/922c6bc8dd8d089cfe4b90ec2120cb48959ba2b5/rts/sm/Compact.c#L844-L848

On Wed, 14 Jul 2021 at 09:27, Ömer Sinan Ağacan wrote:
>
> Two other ideas that should improve GHC's compacting GC much more
> significantly. I've implemented both of these in another project and the
> results were great. In a GC benchmark (mutator does almost no work other than
> allocating data using a bump allocator), first one reduced Wasm instructions
> executed by 14%, second one 19.8%.
>
> Both of these ideas require pushing object headers to the mark stack with the
> objects, which means larger mark stacks. This is the only downside of these
> algorithms.
>
> - Instead of marking and then threading in the next pass, mark phase threads
>   all fields when pushing the fields to the mark stack. We still need two 
> other
>   passes: one to unthread headers, another to move the objects. (we can't do
>   both in one pass, let me know if you're curious and I can elaborate)
>
>   This has the same number of passes as the current implementation, but it 
> only
>   visits object fields once. Currently, we visit fields once when marking, to
>   mark fields, then again in `update_fwd`. This implementation does both in 
> one
>   pass over the fields. `update_fwd` does not visit fields.
>
>   This reduced Wasm instructions executed by 14% in my benchmark.
>
> - Marking phase threads backwards pointers (ignores forwards pointers). Then 
> we
>   do one pass instead of two: for a marked object, unthread it (update
>   forwards pointers to the object's new location), move it to its new 
> location,
>   then thread its forwards pointers. This completely eliminates one of the 3
>   passes, but fields need to be visited twice as before (unlike the algorithm
>   above).
>
>   I think this one is originally described in "An Efficient Garbage Compaction
>   Algorithm", but I found that paper to be difficult to follow. A short
>   description of the same algorithm exists in "High-Performance Garbage
>   Collection for Memory-Constrained Environments" in section 5.1.2.
>
>   This reduced Wasm instructions executed by 19.8% in my benchmark.
>
>   In this algorithm, fields that won't be moved can be threaded any time
>   before the second pass (pointed-to objects need to be marked and pushed to
>   the mark stack with headers before threading a field). For example, in
>   GHC, mut list entries can be threaded before or after marking (but before
>   the second pass) as, IIRC, mut lists are not moved. Same for fields of
>   large objects.
>
> As far as I can see, mark-compact GC is still the default when max heap size
> is specified and the oldest generation size is (by default) more than 30% of
> the max heap size. I'm not sure if max heap size is specified often (it's off
> by default), so I'm not sure what the impact of these improvements would be,
> but if anyone would be interested in funding me to implement these ideas
> (the second algorithm above, and the bitmap iteration in the previous email)
> I could try to allocate one or two days a week to finish in a few months.
>
> Normally these are simple changes, but it's difficult to test and debug GHC's
> RTS as we don't have a test suite readily available and the code is not
> easily testable. In my previous implementations of these algorithms I had
> unit tests for the GC where I could easily generate arbitrary graphs (with 

Re[2]: GHC and the future of Freenode

2021-06-16 Thread Sebastian Graf
Re: memory usage: I get that people don't like bloated Electron clients 
when they already run a browser instance, but fortunately, Element 
doesn't need to be run as a standalone app (*).
Well, except when you want to search encrypted history. But then you're 
out of luck with irccloud, too...


I run Element in a pinned Firefox tab, and according to its task manager 
it reports 27.4MB, which is about the same for my WhatsApp tab. It's a 
bit more than irccloud (about 20MB), but I also have quite a few chats 
in that one tab. It's also quite a lot less than every individual MR 
page of our GitLab instance (27-110MB), of which I have pinned about 
5-10 at any time, so it really doesn't matter much.


Since the Matrix<->Libera.chat bridge works now, I have next to no 
incentive to push for Matrix at the moment, although I still would 
prefer it.
All arguments have been said, so I won't repeat them. It seems like this 
discussion will go stale in time without anyone stepping up to force a 
migration. Fine with me.


Sebastian

-- Originalnachricht --
Von: "Jakub Zalewski" 
An: "Janek Stolarek" ; ghc-devs@haskell.org
Gesendet: 16.06.2021 17:02:28
Betreff: Re: GHC and the future of Freenode


On Tue Jun 15, 2021 at 3:18 PM CEST, Janek Stolarek wrote:

 Apparently, Freenode deleted all registered users and channels several
 hours ago.


I guess that should solve the problem of communities being split between
Freenode and Libera.

If I may add a couple of points in favour of using IRC, even though it may
seem archaic to newcomers:

- it allows them to join without registration (via web.libera.chat), and
- it allows them to use a client of their choosing (I am not aware of
  any stable Matrix clients beside Element).

From my own experience, Electron-based clients (Element included)
consume unreasonable amounts of RAM:  Element is consuming ~700MiB of
RAM just to stay idle on my machine.  To give a fair comparison, the tab
for irccloud tends to consume ~300MiB.

--
Jakub
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs




Re: GHC and the future of Freenode

2021-05-19 Thread Sebastian Graf

Hi,

As one of those contributors that is already using the 
Matrix-to-freenode-IRC bridge through http://element.io/, I'd prefer 
moving to Matrix.
And *if* we commit to a move, I suggest we don't move to another IRC 
server. That leaves Zulip vs. Matrix, both of which I'd be fine with.


For some more data points: I know that the Lean community uses Zulip and 
they are pretty happy with it. The threading model seems to be pretty 
useful.
On the other hand, it's so easy to open a new group chat in Matrix (and 
name it whatever you want) that it is pretty much the same as opening a 
new thread in Zulip.
Our chair used to communicate via IRC, too. We considered switching to 
Zulip or Matrix in the past and ultimately decided in favor of Matrix, 
simply because most of us were already using element.io (+ the IRC 
bridge) for its mobile client and history logging.


Cheers,
Sebastian

-- Originalnachricht --
Von: "Ben Gamari" 
An: "GHC developers" 
Gesendet: 19.05.2021 14:56:23
Betreff: GHC and the future of Freenode


Hi all,

As you may have heard the Freenode IRC network, long the home of #ghc
and several other prominent Haskell channels, appears to be in the
middle of a rather nasty hostile takeover [1,2,3,4]. As a consequence,
it seems it will be necessary to migrate the #ghc community elsewhere.

The next question is, of course, where will this be. One option is
Liberachat, the spiritual successor of Freenode. However, in recent
years I have also heard an increasingly loud choir of users,
contributors, and potential contributors note how arcane IRC feels when
compared to other modern chat platforms. Using IRC effectively in a
collaborative environment essentially requires that all parties use a
bouncer; however, this (understandably) isn't something that most users
are willing to do.

Consequently, I think it would be wise to expand our search space to
include other FOSS platforms. At the moment, I can see the following
options:

 1. Remain on IRC and move to Liberachat, the spiritual successor of
Freenode

 2. Remain on IRC and move to OFTC, another widely used network

 3. Move to [Matrix]

 4. Move to [Zulip]

My sense is that of the non-IRC options Matrix is the truest successor
to IRC, being a federated protocol with a wide array of (if somewhat
immature) clients. I know some of our contributors already use it and in
principle one could configure an IRC-to-Matrix bridge for those existing
contributors who would rather continue using IRC.

Zulip, while also being FOSS, is far more centralized than Matrix and
appears to be more of a chat web application than an open protocol.

Do you know of any other options? Thoughts?

Cheers,

- Ben



[1]: https://gist.github.com/joepie91/df80d8d36cd9d1bde46ba018af497409
[2]: https://fuchsnet.ch/freenode-resign-letter.txt
[3]: https://gist.github.com/aaronmdjones/1a9a93ded5b7d162c3f58bdd66b8f491
[4]: https://mniip.com/freenode.txt
[Matrix]: https://www.matrix.org/
[Zulip]:


Re: Coding style: Using StandaloneKindSignatures in GHC

2021-05-18 Thread Sebastian Graf

Hi Baldur,

I'd be fine with declaring a SAKS whenever I'd need to specify a kind 
signature anyway.
But so far I never needed to specify a kind in the data types or type 
synonyms I declare.
I'd say that providing SAKS for types like `OrdList` or `State` where 
kinds are inferred just fine is overkill, but ultimately I won't fight 
if the majority likes to do that...


Sebastian

-- Originalnachricht --
Von: "Baldur Blöndal" 
An: ghc-devs@haskell.org
Gesendet: 18.05.2021 19:58:18
Betreff: Coding style: Using StandaloneKindSignatures in GHC


Discussion to permit use of StandaloneKindSignatures in the GHC coding
style guide. I believe it increases the clarity of the code,
especially as we move to fancier kinds.

It is the only way we have for giving full signatures to type
synonyms, type classes, type families and others. An example:

type Cat :: Type -> Type
type Cat ob = ob -> ob -> Type

type  Category :: forall ob. Cat ob -> Constraint
class Category cat where
  id :: cat a a ..

type Proxy :: forall k. k -> Type
data Proxy a = Proxy

type Some :: forall k. (k -> Type) -> Type
data Some f where
  Some :: f ex -> Some f

-- | The regular function type
type (->) :: forall {rep1 :: RuntimeRep} {rep2 :: RuntimeRep}.
  TYPE rep1 -> TYPE rep2 -> Type
type (->) = FUN 'Many

This is in line with function definitions that are always given a
top-level, standalone type signature (1) and not like we currently
define type families/synonyms (2) by annotating each argument or not
at all. Using -XStandaloneKindSignatures (3) matches (1)

-- (1)
curry :: ((a, b) -> c) -> (a -> b -> c)
curry f  x y = f (x, y)

-- (2)
type Curry (f :: (a, b) -> c) (x :: a) (y :: b) =  f '(x, y) :: c

-- (3)
type Curry :: ((a, b) -> c) -> (a -> b -> c)
type Curry f x y = f '(x, y)

It covers an edge case that `KindSignatures` doesn't. The only way for
deriving to reference datatype arguments is if they are quantified by
the declaration head -- `newtype Bin a ..`. StandaloneKindSignatures
allows us to still provide a full signature. We could write `newtype
Bin a :: Type -> Type` without it but not `newtype Bin :: Type -> Type
-> Type`

type Bin :: Type -> Type -> Type
newtype Bin a b = Bin (a -> a -> b)
  deriving (Functor, Applicative)
  via (->) a `Compose` (->) a

Let me know what you think




Re: simple Haskell help needed on #19746

2021-04-27 Thread Sebastian Graf
Hi Richard,

Maybe I lack a bit of context, but I don't see why you wouldn't choose (3).
Extending the lexer/parser will yield a declarative specification of what
exactly constitutes a GHC_OPTIONS pragma (albeit in a language that isn't
Haskell) and should be more efficient than `reads`, even if you fix it to
scale linearly. Plus, it seems that's what we do for other pragmas such as
RULE already.

That's my opinion anyway.
Cheers,
Sebastian
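
As an aside on option (2): it may not require reimplementing string reading
from scratch, since base's `Text.ParserCombinators.ReadP` already exposes
`gather`, which returns the characters a parser consumed alongside its result.
Here is a sketch — not what GHC does today, and `readsWithConsumed` is a
hypothetical name:

```haskell
module ReadsConsumed where

import Text.ParserCombinators.ReadP (gather, readP_to_S)
import Text.ParserCombinators.ReadPrec (minPrec, readPrec_to_P)
import Text.Read (readPrec)

-- Like 'reads', but each parse also carries the exact characters that
-- were consumed, so the caller can advance a source location over them
-- (e.g. with advanceSrcLoc, minding tabs and newlines).
readsWithConsumed :: Read a => ReadS (String, a)
readsWithConsumed = readP_to_S (gather (readPrec_to_P readPrec minPrec))
```

One caveat: `gather` fails on parsers built with `readS_to_P`, so this only
works for Read instances defined via ReadP primitives — which should be the
case for the standard String instance involved here.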

Am Di., 27. Apr. 2021 um 21:06 Uhr schrieb Richard Eisenberg <
r...@richarde.dev>:

> Hi devs,
>
> tl;dr: Is there any (efficient) way to get the String consumed by a
> `reads`?
>
> I'm stuck in thinking about a fix for #19746. Happily, the problem is
> simple enough that I could assign it in the first few weeks of a Haskell
> course... and yet I can't find a good solution! So I pose it here for
> inspiration.
>
> The high-level problem: Assign correct source spans to options within an
> OPTIONS_GHC pragma.
>
> Current approach: The payload of an OPTIONS_GHC pragma gets turned into a
> String and then processed by GHC.Utils.Misc.toArgs :: String -> Either
> String [String]. The result of toArgs is either an error string (the Left
> result) or a list of lexed options (the Right result).
>
> A little-known fact is that Haskell strings can be put in an OPTIONS_GHC
> pragma. So I can write both {-# OPTIONS_GHC -funbox-strict-fields #-} and
> {-# OPTIONS_GHC "-funbox-strict-fieds" #-}. Even stranger, I can write {-#
> OPTIONS_GHC ["-funbox-strict-fields"] #-}, where GHC will understand a list
> of strings. While I don't really understand the motivation for this last
> feature (I posted #19750 about this), the middle option, with the quotes,
> seems like it might be useful.
>
> Desired approach: change toArgs to have this type: RealSrcLoc -> String ->
> Either String [Located String], where the input RealSrcLoc is the location
> of the first character of the input String. Then, as toArgs processes the
> input, it advances the RealSrcLoc (with advanceSrcLoc), allowing us to
> create correct SrcSpans for each String.
>
> Annoying fact: Not all characters advance the source location by one
> character. Tabs and newlines don't. Perhaps some other characters don't,
> too.
>
> Central stumbling block: toArgs uses `reads` to parse strings. This makes
> great sense, because `reads` already knows how to convert Haskell String
> syntax into a proper String. The problem is that we have no idea what
> characters were consumed by `reads`. And, short of looking at the length of
> the remainder string in `reads` and comparing it to the length of the input
> string, there seems to be no way to recreate this lost information. Note
> that comparing lengths is slow, because we're dealing with Strings here.
> Once we know what was consumed by `reads`, then we can just repeatedly call
> advanceSrcLoc, and away we go.
>
> Ideas to get unblocked:
> 1. Just do the slow (quadratic in the number of options) thing, looking at
> the lengths of strings often.
> 2. Reimplement reading of strings to return both the result and the
> characters consumed
> 3. Incorporate the parsing of OPTIONS_GHC right into the lexer
>
> It boggles me that there isn't a better solution here. Do you see one?
>
> Thanks,
> Richard
>


Re: instance {Semigroup, Monoid} (Bag a) ?

2021-04-14 Thread Sebastian Graf
Hi Richard,

I've been guilty of slipping in similar instances myself. In fact, I like
OrdList better than Bag precisely because it has more instances and thus a
far better interface.
As for not being able to see whether mempty denotes a Bag: finding out
should be as simple as a mouse hover with HLS set up.
So a +99 from me.

Cheers,
Sebastian
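
To illustrate the upside Richard mentions, here is a small self-contained
sketch — a toy Bag standing in for GHC.Data.Bag, and a hypothetical Expr type
with a hypothetical errorsOf collector — showing how the instances let
Monoid-oriented combinators like foldMap do the plumbing:

```haskell
module BagDemo where

-- Toy stand-in for GHC.Data.Bag, just to show what the instances buy us.
newtype Bag a = Bag [a]

emptyBag :: Bag a
emptyBag = Bag []

unionBags :: Bag a -> Bag a -> Bag a
unionBags (Bag xs) (Bag ys) = Bag (xs ++ ys)

bagToList :: Bag a -> [a]
bagToList (Bag xs) = xs

instance Semigroup (Bag a) where
  (<>) = unionBags

instance Monoid (Bag a) where
  mempty = emptyBag

-- A hypothetical diagnostic collector: with the instances in place,
-- combining the Bags from all children is just foldMap, with no
-- explicit fold over unionBags/emptyBag at the use site.
data Expr = Lit Int | Bad String | App Expr [Expr]

errorsOf :: Expr -> Bag String
errorsOf (Lit _)      = mempty
errorsOf (Bad msg)    = Bag [msg]
errorsOf (App f args) = errorsOf f <> foldMap errorsOf args
```

For example, `bagToList (errorsOf (App (Bad "f") [Lit 1, Bad "x"]))`
evaluates to `["f","x"]`.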



Am Mi., 14. Apr. 2021 um 20:28 Uhr schrieb Richard Eisenberg <
r...@richarde.dev>:

> Hi devs,
>
> In the work on simplifying the error-message infrastructure (heavy lifting
> by Alfredo, in cc), I've been tempted (twice!) to add
>
> > instance Semigroup (Bag a) where
> >   (<>) = unionBags
> >
> > instance Monoid (Bag a) where
> >   mempty = emptyBag
>
> to GHC.Data.Bag.
>
> The downside to writing these is that users might be tempted to write e.g.
> mempty instead of emptyBag, while the latter gives more information to
> readers and induces less manual type inference (to a human reader). The
> upside is that it means Bags work well with Monoid-oriented functions, like
> foldMap.
>
> I favor adding them, and slipped them into !5509 (a big commit with lots
> of other stuff). Alfredo rightly wondered whether this decision deserved
> more scrutiny, and so I'm asking the question here.
>
> What do we think?
>
> Thanks,
> Richard
>


Re: How does GHC implement layout?

2021-04-05 Thread Sebastian Graf
Hi Alexis, Hi Iavor,

I'm afraid I'm not particularly acquainted with how GHC implements
indentation-sensitive parsing, but I really like the way in which this book
frames the problem. If you look at the preview for the first chapter, you'll
notice that (they call the lexer a scanner, and) they introduce an additional
pass between lexer and parser that handles the context-sensitive bits of
indentation-sensitive
parsing, which doesn't fit well with the parser (which assumes a
context-free grammar to stay sane) or with the lexer (which should better
be just a simple DFA).

In particular, the short section 1.3 about the screener explicitly mentions
Haskell as use case:

[image: 2021-04-05-140820_543x163_scrot.png]
Although that doesn't really explain the how. Maybe section 2.5 (where impl
of screeners is covered) provides more insight into that.
Screeners are a bit like semantic analysis, but on the token stream instead
of the parse tree.

Reach out in private if you want more excerpts.

Cheers,
Sebastian
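
To make the screener idea concrete, here is a deliberately tiny sketch —
nothing like GHC's actual implementation, and ignoring the Report's hard
"close a context on parse error" rule entirely — of a token-stream pass that
inserts virtual braces and semicolons from column information:

```haskell
module Screener where

-- Tokens carry the line and column where they start.
data Tok = Tok { tLine :: Int, tCol :: Int, tStr :: String }

layoutKeyword :: String -> Bool
layoutKeyword s = s `elem` ["where", "let", "do", "of"]

-- A (very) simplified layout pass: after a layout keyword, open a
-- context at the next token's column; on a new line, compare the
-- column with the enclosing context to emit ';' or '}'.
screen :: [Tok] -> [String]
screen ts0 = go ts0 0 []
  where
    go [] _ stack = replicate (length stack) "}"
    go (t:ts) lastLine stack
      | tLine t > lastLine
      , (m:ms) <- stack
      , tCol t < m  = "}" : go (t:ts) lastLine ms   -- dedent: close context
      | tLine t > lastLine
      , (m:_) <- stack
      , tCol t == m = ";" : emit                     -- same column: next item
      | otherwise   = emit                           -- continuation line
      where
        emit
          | layoutKeyword (tStr t), (n:_) <- ts
          = tStr t : "{" : go ts (tLine n) (tCol n : stack)
          | otherwise
          = tStr t : go ts (tLine t) stack
```

For instance, feeding it the tokens of `f x = do` / `  putStrLn x` /
`  pure x` and joining with `unwords` yields
`f x = do { putStrLn x ; pure x }`.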

Am Mo., 5. Apr. 2021 um 00:19 Uhr schrieb Alexis King :

> On 4/4/21 1:52 PM, Iavor Diatchki wrote:
>
> Hi Alexis,
>
> I wasn't sure what the "alternative layout" is either and did some
> googling, and it appears that it is something that was never really
> documented properly.   The following link contains pointers to the commit
> that introduced it (in 2009!)  (not the main ticket but some of the
> comments)
>
> Thanks, that’s a helpful pointer, though of course it still doesn’t
> explain very much. I’m still interested in understanding what the purpose
> of “alternative layout” is and how it operates, if anyone else has any idea.
>
> Overall, I do think that Haskell's layout rule is more complicated than it
> needs to be, and this is mostly because of the rule that requires the
> insertion of a "virtual close curly" on a parse error.
>
> Yes, this does seem to be by far the trickiest bit. But I’d be sad not to
> have it, as without it, even simple things like
>
> let x = 3 in e
>
> would not be grammatically valid.
>
> My feeling is that it'd be pretty tricky to do layout in the parser with
> grammar rules, but you may be able to do something with the parser state.
>
> Yes, I have some vague ideas, but none of them are particularly fleshed
> out. It’s entirely possible that I just don’t understand the relationship
> between the lexer and the parser (which seems somewhat obscured by the
> “alternative layout” stuff), and the ideas I have are what’s already
> implemented today. I’ll have to study the implementation more closely.
>
> In any case, thank you for your response! The ALR-related pointer
> certainly clarifies at least a little.
>
> Alexis
>


Re: Multiple versions of happy

2021-03-30 Thread Sebastian Graf
Hi Simon,

According to the configure script, you can use the HAPPY env variable. e.g.

$ HAPPY=/full/path/to/happy ./configure

Hope that helps. Cheers,
Sebastian
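
For step 1, one way — the paths and the exact 1.19.* version below are
illustrative, not prescriptive — is to install the older happy into its own
directory with cabal and point the HAPPY variable at it:

```shell
# Install happy-1.19 into a separate directory, leaving happy-1.20 alone:
cabal install happy-1.19.12 \
  --installdir="$HOME/.local/opt/happy-1.19" \
  --install-method=copy

# Then tell configure to use it, without touching what's on $PATH:
HAPPY="$HOME/.local/opt/happy-1.19/happy" ./configure
```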

Am Di., 30. März 2021 um 15:19 Uhr schrieb Simon Peyton Jones via ghc-devs <
ghc-devs@haskell.org>:

> What’s the approved mechanism to install multiple versions of happy/alex
> etc?  Eg I tried to build ghc-9.0 and got this:
>
> checking for makeinfo... no
>
> checking for python3... /usr/bin/python3
>
> checking for ghc-pkg matching /opt/ghc/bin/ghc... /opt/ghc/bin/ghc-pkg
>
> checking for happy... /home/simonpj/.cabal/bin/happy
>
> checking for version of happy... 1.20.0
>
> configure: error: Happy version 1.19 is required to compile GHC.
>
>
>
> I so I have to
>
>1. Install happy-1.19 without overwriting the installed happy-1.20
>2. Tell configure to use happy-1.19
>
> What’s the best way to do those two things?
>
> Thanks
>
> Simon
>


Re: Type inference of singular matches on GADTs

2021-03-28 Thread Sebastian Graf
Hi Alexis,

If you really want to get by without type annotations, then Viktor's
pattern synonym suggestion really is your best option! Note that

pattern HNil :: HList '[];
pattern HNil = HNil_

does not actually declare an HNil that is completely synonymous with HNil_,
but it changes the *provided* GADT constraint (as ~ '[]) into a *required*
constraint (as ~ '[]).
"Provided" as in "a pattern match on the synonym provides this constraint
as a new Given", "required" as in "... requires this constraint as a new
Wanted". (I hope I used the terminology correctly here.)
Thus, a pattern ((a :: Int) `HCons` HNil) really has type (HList '[Int])
and is exhaustive. See also
https://gitlab.haskell.org/ghc/ghc/-/wikis/pattern-synonyms#static-semantics
.
At the moment, I don't think it's possible to declare a GADT constructor
with required constraints, so a pattern synonym seems like your best bet
and fits your use case exactly.
You can put each of these pattern synonyms into a singleton COMPLETE pragma.

Hope that helps,
Sebastian


Am Sa., 27. März 2021 um 06:27 Uhr schrieb Viktor Dukhovni <
ietf-d...@dukhovni.org>:

> On Fri, Mar 26, 2021 at 07:41:09PM -0500, Alexis King wrote:
>
> > type applications in patterns are still not enough to satisfy me. I
> > provided the empty argument list example because it was simple, but I’d
> > also like this to typecheck:
> >
> > baz :: Int -> String -> Widget
> > baz = 
> >
> > bar = foo (\(a `HCons` b `HCons` HNil) -> baz a b)
> >
>
> Can you be a bit more specific on how the constraint `Blah` is presently
> defined, and how `foo` uses the HList type to execute a function of the
> appropriate arity and signature?
>
> The example below my signature typechecks, provided I use pattern
> synonyms for the GADT constructors, rather than use the constructors
> directly.
>
> --
> Viktor.
>
> {-# language DataKinds
>, FlexibleInstances
>, GADTs
>, PatternSynonyms
>, ScopedTypeVariables
>, TypeApplications
>, TypeFamilies
>, TypeOperators
>#-}
>
> import GHC.Types
> import Data.Proxy
> import Type.Reflection
> import Data.Type.Equality
>
> data HList as where
>   HNil_  :: HList '[]
>   HCons_ :: a -> HList as -> HList (a ': as)
> infixr 5 `HCons_`
>
> pattern HNil :: HList '[];
> pattern HNil = HNil_
> pattern (:^) :: a -> HList as -> HList (a ': as)
> pattern (:^) a as = HCons_ a as
> pattern (:$) a b = a :^ b :^ HNil
> infixr 5 :^
> infixr 5 :$
>
> class Typeable as => Blah as where
> params :: HList as
> instance Blah '[Int,String] where
> params = 39 :$ "abc"
>
> baz :: Int -> String -> Int
> baz i s = i + length s
>
> bar = foo (\(a :$ b) -> baz a b)
>
> foo :: Blah as => (HList as -> Int) -> Int
> foo f = f params
>


Re: Pattern matching desugaring regression? Re: [Haskell-cafe] Why does my module take so long to compile?

2021-03-28 Thread Sebastian Graf
Hi Troels,

Sorry to hear GHC 9 didn't fix your problems! Yes, please open an issue.

Optimising for specific usage patterns might be feasible, although note
that most often it's not the exhaustivity check that is causing problems,
but the check for overlapping patterns.
At the moment the checker doesn't take shortcuts if we have
-Wincomplete-patterns, but -Wno-overlapping-patterns. Maybe it could? Let's
see what is causing you problems and decide then.

Cheers,
Sebastian

Am So., 28. März 2021 um 15:44 Uhr schrieb Troels Henriksen <
at...@sigkill.dk>:

> Troels Henriksen  writes:
>
> > It is very likely that issue 17386 is the issue.  With
> >
> > {-# OPTIONS_GHC -Wno-overlapping-patterns -Wno-incomplete-patterns
> > -Wno-incomplete-uni-patterns -Wno-incomplete-record-updates #-}
> >
> > my module(s) compile very quickly.  I'll wait and see if GHC 9 does
> > better before I try to create a smaller case (and now I at least have a
> > workaround).
>
> I have now tried it with GHC 9, and unfortunately it is still very slow.
>
> As time permits, I will see if I can come up with a self-contained
> module that illustrates the slowdown.
>
> I do have an idea for a optimisation: In all of the cases where coverage
> tracking takes a long time, I have a catch-all case at the bottom.  I
> think that is a fairly common pattern, where a program tries to detect
> various special cases, before falling back to a general case.  Perhaps
> coverage checking should have a short-circuiting check for whether there
> is an obvious catch-all case, and if so, not bother looking any closer?
>
> --
> \  Troels
> /\ Henriksen
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Type inference of singular matches on GADTs

2021-03-22 Thread Sebastian Graf
Cale made me aware of the fact that the "Type applications in patterns"
proposal had already been implemented.
See https://gitlab.haskell.org/ghc/ghc/-/issues/19577 where I adapt Alexis'
use case into a test case that I'd like to see compiling.

Am Sa., 20. März 2021 um 15:45 Uhr schrieb Sebastian Graf <
sgraf1...@gmail.com>:

> Hi Alexis,
>
> The following works and will have inferred type `Int`:
>
> > bar = foo (\(HNil :: HList '[]) -> 42)
>
> I'd really like it if we could write
>
> > bar2 = foo (\(HNil @'[]) -> 42)
>
> though, even if you write out the constructor type with explicit
> constraints and forall's.
> E.g. by using a -XTypeApplications here, I specify the universal type var
> of the type constructor `HList`. I think that is a semantics that is in
> line with Type Variables in Patterns, Section 4
> <https://dl.acm.org/doi/10.1145/3242744.3242753>: The only way to satisfy
> the `as ~ '[]` constraint in the HNil pattern is to refine the type of the
> pattern match to `HList '[]`. Consequently, the local `Blah '[]` can be
> discharged and bar2 will have inferred `Int`.
>
> But that's simply not implemented at the moment, I think. I recall there's
> some work that has to happen before. The corresponding proposal seems to be
> https://ghc-proposals.readthedocs.io/en/latest/proposals/0126-type-applications-in-patterns.html
> (or https://github.com/ghc-proposals/ghc-proposals/pull/238? I'm
> confused) and your example should probably be added there as motivation.
>
> If `'[]` is never mentioned anywhere in the pattern like in the original
> example, I wouldn't expect it to type-check (or at least emit a
> pattern-match warning): First off, the type is ambiguous. It's a similar
> situation as in
> https://stackoverflow.com/questions/50159349/type-abstraction-in-ghc-haskell.
> If it was accepted and got type `Blah as => Int`, then you'd get a
> pattern-match warning, because depending on how `as` is instantiated, your
> pattern-match is incomplete. E.g., `bar3 @[Int]` would crash.
>
> Complete example code:
>
> {-# LANGUAGE DataKinds #-}
> {-# LANGUAGE TypeOperators #-}
> {-# LANGUAGE GADTs #-}
> {-# LANGUAGE LambdaCase #-}
> {-# LANGUAGE TypeApplications #-}
> {-# LANGUAGE ScopedTypeVariables #-}
> {-# LANGUAGE RankNTypes #-}
>
> module Lib where
>
> data HList as where
>   HNil  :: forall as. (as ~ '[]) => HList as
>   HCons :: forall as a as'. (as ~ (a ': as')) => a -> HList as' -> HList as
>
> class Blah as where
>   blah :: HList as
>
> instance Blah '[] where
>   blah = HNil
>
> foo :: Blah as => (HList as -> Int) -> Int
> foo f = f blah
>
> bar = foo (\(HNil :: HList '[]) -> 42) -- compiles
> bar2 = foo (\(HNil @'[]) -> 42) -- errors
>
> Cheers,
> Sebastian
>
> Am Sa., 20. März 2021 um 13:57 Uhr schrieb Viktor Dukhovni <
> ietf-d...@dukhovni.org>:
>
>> On Sat, Mar 20, 2021 at 08:13:18AM -0400, Viktor Dukhovni wrote:
>>
> > As soon as I try to add more complex constraints, I appear to need an
>> > explicit type signature for HNil, and then the code again compiles:
>>
>> But aliasing the promoted constructors via pattern synonyms, and using
>> those instead, appears to resolve the ambiguity.
>>
>> --
>> Viktor.
>>
>> {-# LANGUAGE
>> DataKinds
>>   , GADTs
>>   , PatternSynonyms
>>   , PolyKinds
>>   , ScopedTypeVariables
>>   , TypeFamilies
>>   , TypeOperators
>>   #-}
>>
>> import GHC.Types
>>
>> infixr 1 `HC`
>>
>> data HList as where
>>   HNil  :: HList '[]
>>   HCons :: a -> HList as -> HList (a ': as)
>>
>> pattern HN :: HList '[];
>> pattern HN = HNil
>> pattern HC :: a -> HList as -> HList (a ': as)
>> pattern HC a as = HCons a as
>>
>> class Nogo a where
>>
>> type family   Blah (as :: [Type]) :: Constraint
>> type instance Blah '[]= ()
>> type instance Blah (_ ': '[]) = ()
>> type instance Blah (_ ': _ ': '[]) = ()
>> type instance Blah (_ ': _ ': _ ': _) = (Nogo ())
>>
>> foo :: (Blah as) => (HList as -> Int) -> Int
>> foo _ = 42
>>
>> bar :: Int
>> bar = foo (\ HN -> 1)
>>
>> baz :: Int
>> baz = foo (\ (True `HC` HN) -> 2)
>>
>> pattern One :: Int
>> pattern One = 1
>> bam :: Int
>> bam = foo (\ (True `HC` One `HC` HN) -> 2)
>>
>


Re: Type inference of singular matches on GADTs

2021-03-20 Thread Sebastian Graf
Hi Alexis,

The following works and will have inferred type `Int`:

> bar = foo (\(HNil :: HList '[]) -> 42)

I'd really like it if we could write

> bar2 = foo (\(HNil @'[]) -> 42)

though, even if you write out the constructor type with explicit
constraints and forall's.
E.g. by using a -XTypeApplications here, I specify the universal type var
of the type constructor `HList`. I think that is a semantics that is in
line with Type Variables in Patterns, Section 4: The only way to satisfy
the `as ~ '[]` constraint in the HNil pattern is to refine the type of the
pattern match to `HList '[]`. Consequently, the local `Blah '[]` can be
discharged and bar2 will have inferred `Int`.

But that's simply not implemented at the moment, I think. I recall there's
some work that has to happen before. The corresponding proposal seems to be
https://ghc-proposals.readthedocs.io/en/latest/proposals/0126-type-applications-in-patterns.html
(or https://github.com/ghc-proposals/ghc-proposals/pull/238? I'm confused)
and your example should probably be added there as motivation.

If `'[]` is never mentioned anywhere in the pattern like in the original
example, I wouldn't expect it to type-check (or at least emit a
pattern-match warning): First off, the type is ambiguous. It's a similar
situation as in
https://stackoverflow.com/questions/50159349/type-abstraction-in-ghc-haskell.
If it was accepted and got type `Blah as => Int`, then you'd get a
pattern-match warning, because depending on how `as` is instantiated, your
pattern-match is incomplete. E.g., `bar3 @[Int]` would crash.

Complete example code:

{-# LANGUAGE DataKinds #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE LambdaCase #-}
{-# LANGUAGE TypeApplications #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE RankNTypes #-}

module Lib where

data HList as where
  HNil  :: forall as. (as ~ '[]) => HList as
  HCons :: forall as a as'. (as ~ (a ': as')) => a -> HList as' -> HList as

class Blah as where
  blah :: HList as

instance Blah '[] where
  blah = HNil

foo :: Blah as => (HList as -> Int) -> Int
foo f = f blah

bar = foo (\(HNil :: HList '[]) -> 42) -- compiles
bar2 = foo (\(HNil @'[]) -> 42) -- errors

Cheers,
Sebastian

Am Sa., 20. März 2021 um 13:57 Uhr schrieb Viktor Dukhovni <
ietf-d...@dukhovni.org>:

> On Sat, Mar 20, 2021 at 08:13:18AM -0400, Viktor Dukhovni wrote:
>
> > As soon as I try to add more complex constraints, I appear to need an
> > explicit type signature for HNil, and then the code again compiles:
>
> But aliasing the promoted constructors via pattern synonyms, and using
> those instead, appears to resolve the ambiguity.
>
> --
> Viktor.
>
> {-# LANGUAGE
> DataKinds
>   , GADTs
>   , PatternSynonyms
>   , PolyKinds
>   , ScopedTypeVariables
>   , TypeFamilies
>   , TypeOperators
>   #-}
>
> import GHC.Types
>
> infixr 1 `HC`
>
> data HList as where
>   HNil  :: HList '[]
>   HCons :: a -> HList as -> HList (a ': as)
>
> pattern HN :: HList '[];
> pattern HN = HNil
> pattern HC :: a -> HList as -> HList (a ': as)
> pattern HC a as = HCons a as
>
> class Nogo a where
>
> type family   Blah (as :: [Type]) :: Constraint
> type instance Blah '[]= ()
> type instance Blah (_ ': '[]) = ()
> type instance Blah (_ ': _ ': '[]) = ()
> type instance Blah (_ ': _ ': _ ': _) = (Nogo ())
>
> foo :: (Blah as) => (HList as -> Int) -> Int
> foo _ = 42
>
> bar :: Int
> bar = foo (\ HN -> 1)
>
> baz :: Int
> baz = foo (\ (True `HC` HN) -> 2)
>
> pattern One :: Int
> pattern One = 1
> bam :: Int
> bam = foo (\ (True `HC` One `HC` HN) -> 2)
>


Re: On CI

2021-03-18 Thread Sebastian Graf
To be clear: All performance tests that run as part of CI measure
allocations only. No wall clock time.
Those measurements are (mostly) deterministic and reproducible between
compiles of the same worktree and not impacted by thermal issues/hardware
at all.

Am Do., 18. März 2021 um 18:09 Uhr schrieb davean :

> That really shouldn't be near system noise for a well constructed
> performance test. You might be seeing things like thermal issues, etc
> though - good benchmarking is a serious subject.
> Also we're not talking wall clock tests, we're talking specific metrics.
> The machines do tend to be bare metal, but many of these are entirely CPU
> performance independent, memory timing independent, etc. Well not quite but
> that's a longer discussion.
>
> The investigation of Haskell code performance is a very good thing to do
> BTW, but you'd still want to avoid regressions in the improvements you
> made. How well we can do that and the cost of it is the primary issue here.
>
> -davean
>
>
> On Wed, Mar 17, 2021 at 6:22 PM Karel Gardas 
> wrote:
>
>> On 3/17/21 4:16 PM, Andreas Klebinger wrote:
>> > Now that isn't really an issue anyway I think. The question is rather is
>> > 2% a large enough regression to worry about? 5%? 10%?
>>
>> 5-10% is still around the system-noise level even on a lightly loaded
>> workstation. I'm not sure whether CI runs on shared cloud resources,
>> where noise may be even higher.
>>
>> I've done a simple experiment of pinning ghc while compiling ghc-cabal
>> and I've been able to "speed" it up by 5-10% on a W-2265.
>>
>> Also, following this CI/performance-regression discussion, I'm not
>> entirely sure whether this is not just a witch-hunt that mostly hurts the
>> most active GHC developers. Another idea may be to give up on CI doing
>> perf regression testing at all and invest the saved resources into a
>> proper investigation of GHC/Haskell program performance. I'm not sure
>> whether this wouldn't be more beneficial in the longer term.
>>
>> Just one random number thrown into the ring: Linux's perf claims that
>> nearly every second L3 cache access on the example above ends in a cache
>> miss. Is that a good number or a bad one? See the stats below (perf stat
>> -d on ghc with +RTS -T -s -RTS).
>>
>> Good luck to anybody working on that!
>>
>> Karel
>>
>>
>> Linking utils/ghc-cabal/dist/build/tmp/ghc-cabal ...
>>   61,020,836,136 bytes allocated in the heap
>>5,229,185,608 bytes copied during GC
>>  301,742,768 bytes maximum residency (19 sample(s))
>>3,533,000 bytes maximum slop
>>  840 MiB total memory in use (0 MB lost due to fragmentation)
>>
>>                          Tot time (elapsed)  Avg pause  Max pause
>>   Gen  0      2012 colls,     0 par    5.725s   5.731s     0.0028s    0.1267s
>>   Gen  1        19 colls,     0 par    1.695s   1.696s     0.0893s    0.2636s
>>
>>   TASKS: 4 (1 bound, 3 peak workers (3 total), using -N1)
>>
>>   SPARKS: 0 (0 converted, 0 overflowed, 0 dud, 0 GC'd, 0 fizzled)
>>
>>   INIT    time    0.000s  (  0.000s elapsed)
>>   MUT     time   27.849s  ( 32.163s elapsed)
>>   GC      time    7.419s  (  7.427s elapsed)
>>   EXIT    time    0.000s  (  0.010s elapsed)
>>   Total   time   35.269s  ( 39.601s elapsed)
>>
>>   Alloc rate2,191,122,004 bytes per MUT second
>>
>>   Productivity  79.0% of total user, 81.2% of total elapsed
>>
>>
>>  Performance counter stats for
>> '/export/home/karel/sfw/ghc-8.10.3/bin/ghc -H32m -O -Wall -optc-Wall -O0
>> -hide-all-packages -package ghc-prim -package base -package binary
>> -package array -package transformers -package time -package containers
>> -package bytestring -package deepseq -package process -package pretty
>> -package directory -package filepath -package template-haskell -package
>> unix --make utils/ghc-cabal/Main.hs -o
>> utils/ghc-cabal/dist/build/tmp/ghc-cabal -no-user-package-db -Wall
>> -fno-warn-unused-imports -fno-warn-warnings-deprecations
>> -DCABAL_VERSION=3,4,0,0 -DBOOTSTRAPPING -odir bootstrapping -hidir
>> bootstrapping libraries/Cabal/Cabal/Distribution/Fields/Lexer.hs
>> -ilibraries/Cabal/Cabal -ilibraries/binary/src -ilibraries/filepath
>> -ilibraries/hpc -ilibraries/mtl -ilibraries/text/src
>> libraries/text/cbits/cbits.c -Ilibraries/text/include
>> -ilibraries/parsec/src +RTS -T -s -RTS':
>>
>>          39,632.99 msec task-clock                #    0.999 CPUs utilized
>>             17,191      context-switches          #    0.434 K/sec
>>                  0      cpu-migrations            #    0.000 K/sec
>>            899,930      page-faults               #    0.023 M/sec
>>    177,636,979,975      cycles                    #    4.482 GHz        (87.54%)
>>    181,945,795,221      instructions              #    1.02  insn per cycle  (87.59%)
>>     34,033,574,511      branches                  #  858.718 M/sec      (87.42%)
>>      1,664,969,299      branch-misses             #    4.89% of all branches  (87.48%)
>>     41,522,737,426

Re: On CI

2021-03-17 Thread Sebastian Graf
Re: Performance drift: I opened
https://gitlab.haskell.org/ghc/ghc/-/issues/17658 a while ago with an idea
of how to measure drift a bit better.
It's basically an automatically checked version of "Ben stares at
performance reports every two weeks and sees that T9872 has regressed by
10% since 9.0"

Maybe we can have Marge check for drift and each individual MR for
incremental perf regressions?

Sebastian

Am Mi., 17. März 2021 um 14:40 Uhr schrieb Richard Eisenberg <
r...@richarde.dev>:

>
>
> On Mar 17, 2021, at 6:18 AM, Moritz Angermann 
> wrote:
>
> But what do we expect of patch authors? Right now if five people write
> patches to GHC, and each of them eventually manage to get their MRs green,
> after a long review, they finally see it assigned to marge, and then it
> starts failing? Their patch on its own was fine, but their aggregate with
> other people's code leads to regressions? So we now expect all patch
> authors together to try to figure out what happened? Figuring out why
> something regressed is hard enough, and we only have a very few people who
> are actually capable of debugging this. Thus I believe it would end up with
> Ben, Andreas, Matthiew, Simon, ... or someone else from GHC HQ anyway to
> figure out why it regressed, be it in the Review Stage, or dissecting a
> marge aggregate, or on master.
>
>
> I have previously posted against the idea of allowing Marge to accept
> regressions... but the paragraph above is sadly convincing. Maybe Simon is
> right about opening up the windows to, say, be 100% (which would catch a
> 10x regression) instead of infinite, but I'm now convinced that Marge
> should be very generous in allowing regressions -- provided we also have
> some way of monitoring drift over time.
>
> Separately, I've been concerned for some time about the peculiarity of our
> perf tests. For example, I'd be quite happy to accept a 25% regression on
> T9872c if it yielded a 1% improvement on compiling Cabal. T9872 is very
> very very strange! (Maybe if *all* the T9872 tests regressed, I'd be more
> worried.) I would be very happy to learn that some more general,
> representative tests are included in our examinations.
>
> Richard


Re: GHC 9.1?

2021-03-01 Thread Sebastian Graf
Hi,

I generally would like +0.1 steps, but mostly because it causes less
head-scratching for everyone new to Haskell. Basically the same argument
Richard makes.

I can't comment on how much head.hackage (or any tool) relies on odd
version numbers; I certainly never have.
we have complete control), does it really matter anyway? When would we say
<=9.1 rather than <=9.2? Shouldn't 9.1 at one point become binary
compatible with 9.2, as if it really was "9.2.-1" (according to the PVP,
9.2.0 is actually > 9.2, so that won't work)? I think there are multiple
ways in which we could avoid using 9.1 as the namespace for "somewhere
between 9.0 and 9.2 exclusively". We have alpha releases, so why don't we
name it 9.1.nightly?

> majormajor.odd.timestamp

TBH, I found the fact that the *configure* date (I think?) is embedded in
the version rather annoying. I sometimes have two checkouts configured at
different dates but branching off from the same base commit, so I'm pretty
sure that interface files are compatible. Yet when I try to run one
compiler on the package database of the other (because I might have copied
a compiler invocation from stdout that contained an absolute path), I get
an error due to the interface file version mismatch. I'd rather have a
crash or undefined behavior than a check based on the configure date,
especially since I'm just debugging anyway.
I do get why we want to embed it for release management purposes, though.

Cheers,
Sebastian

Am Di., 2. März 2021 um 05:45 Uhr schrieb Carter Schonwald <
carter.schonw...@gmail.com>:

> It makes determining if a ghc build was a dev build vs a tagged release
> much easier.  Odd == I’m using a dev build, because it reports a version
> like majormajor.odd.timestamp, right? — we still do that with dev/master,
> right?
>
> At some level any versioning notation is a social convention, and this one
> does have the advantage of making dev builds apparent while letting
> things like head.hackage have coherent versioning for treating these
> releases sanely?
>
> Otoh. It’s all a social construct. So any approach that helps all relevant
> communities is always welcome.  Though even numbers are nice ;)
>
> On Mon, Mar 1, 2021 at 11:30 PM Richard Eisenberg 
> wrote:
>
>> Hi devs,
>>
>> I understand that GHC uses the same version numbering system as the Linux
>> kernel did until 2003(*), using odd numbers for unstable "releases" and
>> even ones for stable ones. I have seen this become a point of confusion, as
>> in: "Quick Look just missed the cutoff for GHC 9.0, so it will be out in
>> GHC 9.2" "Um, what about 9.1?"
>>
>> Is there a reason to keep this practice? Linux moved away from it 18
>> years ago and seems to have thrived despite. Giving this convention up on a
>> new first-number change (the change from 8 to 9) seems like a good time.
>>
>> I don't feel strongly about this, at all -- just asking a question that
>> maybe no one has asked in a long time.
>>
>> Richard
>>
>> (*) I actually didn't know that Linux stopped doing this until writing
>> this email, wondering why we needed to tie ourselves to Linux. I
>> coincidentally stopped using Linux full-time (and thus administering my own
>> installation) in 2003, when I graduated from university.


Re: On CI

2021-02-19 Thread Sebastian Graf
Recompilation avoidance

I think in order to cache more in CI, we first have to invest some time in
fixing recompilation avoidance in our bootstrapped build system.

I just tested on a hadrian perf ticky build: Adding one line of *comment*
in the compiler causes

   - a (pretty slow, yet negligible) rebuild of the stage1 compiler
   - 2 minutes of RTS rebuilding (Why do we have to rebuild the RTS? It
   doesn't depend in any way on the change I made)
   - an apparent full rebuild of the libraries
   - an apparent full rebuild of the stage2 compiler

That took 17 minutes, a full build takes ~45minutes. So there definitely is
some caching going on, but not nearly as much as there could be.
I know there have been great and boring efforts on compiler determinism in
the past, but either it's not good enough or our build system needs fixing.
I think a good first step to assert would be to make sure that the hash of
the stage1 compiler executable doesn't change if I only change a comment.
I'm aware there probably is stuff going on, like embedding configure dates
in interface files and executables, that would need to go, but if possible
this would be a huge improvement.

On the other hand, we can simply tack on a [skip ci] to the commit message,
as I did for https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4975.
Variants like [skip tests] or [frontend] could help to identify which tests
to run by default.

Lean

I had a chat with a colleague about how they do CI for Lean. Apparently, CI
turnaround time including tests is generally 25 minutes (~15 minutes for
the build) for a complete pipeline, testing 6 different OSes and
configurations in parallel:
https://github.com/leanprover/lean4/actions/workflows/ci.yml
They utilise ccache to cache the clang-based C++-backend, so that they only
have to re-run the front- and middle-end. In effect, they take advantage of
the fact that the "function" clang, in contrast to the "function" stage1
compiler, stays the same.
It's hard to achieve that for GHC, where a complete compiler pipeline comes
as one big, fused "function": An external tool can never be certain that a
change to Parser.y could not affect the CodeGen phase.

Inspired by Lean, the following is a bit vague and imaginary, but
maybe we could make it so that compiler phases "sign" parts of the
interface file with the binary hash of the respective subcomponents of the
phase?
E.g., if all the object files that influence CodeGen (that will later be
linked into the stage1 compiler) result in a hash of 0xdeadbeef before and
after the change to Parser.y, we know we can stop recompiling Data.List
with the stage1 compiler when we see that the IR passed to CodeGen didn't
change, because the last compile did CodeGen with a stage1 compiler with
the same hash 0xdeadbeef. The 0xdeadbeef hash is a proxy for saying "the
function CodeGen stayed the same", so we can reuse its cached outputs.
Of course, that is utopic without a tool that does the "taint analysis" of
which modules in GHC influence CodeGen. Probably just including all the
transitive dependencies of GHC.CmmToAsm suffices, but probably that's too
crude already. For another example, a change to GHC.Utils.Unique would
probably entail a full rebuild of the compiler because it basically affects
all compiler phases.
There are probably parallels with recompilation avoidance in a language
with staged meta-programming.
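To make the fingerprinting idea a bit more concrete, here is a hedged sketch (the name `phaseFingerprint` and the whole setup are made up for illustration; this is not GHC's actual recompilation machinery). base's GHC.Fingerprint already provides file hashing, so the "identity" of a phase could simply be the combined hash of the object files that implement it:

```haskell
import GHC.Fingerprint (Fingerprint, fingerprintFingerprints, getFileHash)

-- Hypothetical: treat the "function CodeGen" as the combined hash of the
-- object files that get linked into the stage1 compiler and influence
-- code generation. If this fingerprint is unchanged after a change to
-- Parser.y, cached CodeGen outputs from the previous build remain valid.
phaseFingerprint :: [FilePath] -> IO Fingerprint
phaseFingerprint objs = fingerprintFingerprints <$> mapM getFileHash objs
```

The hard part, as said above, is the "taint analysis" deciding which object files belong in that list; the hashing itself is cheap.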

Am Fr., 19. Feb. 2021 um 11:42 Uhr schrieb Josef Svenningsson via ghc-devs <
ghc-devs@haskell.org>:

> Doing "optimistic caching" like you suggest sounds very promising. A way
> to regain more robustness would be as follows.
> If the build fails while building the libraries or the stage2 compiler,
> this might be a false negative due to the optimistic caching. Therefore,
> evict the "optimistic caches" and restart building the libraries. That way
> we can validate that the build failure was a true build failure and not
> just due to the aggressive caching scheme.
>
> Just my 2p
>
> Josef
>
> --
> *From:* ghc-devs  on behalf of Simon Peyton
> Jones via ghc-devs 
> *Sent:* Friday, February 19, 2021 8:57 AM
> *To:* John Ericson ; ghc-devs <
> ghc-devs@haskell.org>
> *Subject:* RE: On CI
>
>
>1. Building and testing happen together. When tests fail
>spuriously, we also have to rebuild GHC in addition to re-running the
>tests. That's pure waste.
>https://gitlab.haskell.org/ghc/ghc/-/issues/13897 tracks this more or less.
>
> I don’t get this.  We have to build GHC before we can test it, don’t we?
>
> 2. We don't cache between jobs.

Re: On CI

2021-02-17 Thread Sebastian Graf
Hi Moritz,

I, too, had my gripes with CI turnaround times in the past. Here's a
somewhat radical proposal:

   - Run "full-build" stage builds only on Marge MRs. Then we can assign to
   Marge much earlier, but probably have to do a bit more of (manual)
   bisecting of spoiled Marge batches.
  - I hope this gets rid of a bit of the friction of small MRs. I
  recently caught myself wanting to do a bunch of small, independent, but
  related changes as part of the same MR, simply because it's such a hassle
  to post them in individual MRs right now and also because it steals so
  much CI capacity.
   - Regular MRs should still have the ability to easily run individual
   builds of what is now the "full-build" stage, similar to how we can run
   optional "hackage" builds today. This is probably useful to pin down the
   reason for a spoiled Marge batch.
   - The CI capacity we free up can probably be used to run a perf build
   (such as the fedora release build) on the "build" stage (the one where we
   currently run stack-hadrian-build and the validate-deb9-hadrian build), in
   parallel.
   - If we decide against the latter, a micro-optimisation could be to
   cache the build artifacts of the "lint-base" build and continue the build
   in the validate-deb9-hadrian build of the "build" stage.

The usefulness of this approach depends on how many MRs cause metric
changes on different architectures.

Another frustrating aspect is that if you want to merge an n-sized chain of
dependent changes individually, you have to

   - Open an MR for each change (initially the last change will be
   comprised of n commits)
   - Review first change, turn pipeline green   (A)
   - Assign to Marge, wait for batch to be merged   (B)
   - Review second change, turn pipeline green
   - Assign to Marge, wait for batch to be merged
   - ... and so on ...

Note that (A) incurs many context switches for the dev and the latency of
*at least* one run of CI.
And then (B) incurs the latency of *at least* one full-build, if you're
lucky and the batch succeeds. I've recently seen batches that were
resubmitted by Marge at least 5 times due to spurious CI failures and
timeouts. I think this is a huge factor for latency.

Although after (A), I should just pop the patch off my mental stack,
that isn't particularly true, because Marge keeps on reminding me when a
stack fails or succeeds, both of which require at least some attention from
me: Failed 2 times => Make sure it was spurious, Succeeds => Rebase next
change.

Maybe we can also learn from other projects like Rust, GCC or clang, which
I haven't had a look at yet.

Cheers,
Sebastian

Am Mi., 17. Feb. 2021 um 09:11 Uhr schrieb Moritz Angermann <
moritz.angerm...@gmail.com>:

> Friends,
>
> I've been looking at CI recently again, as I was facing CI turnaround
> times of 9-12hs; and this just keeps dragging out and making progress hard.
>
> The pending pipeline currently has 2 darwin, and 15 windows builds
> waiting. Windows builds on average take ~220minutes. We have five builders,
> so we can expect this queue to be done in ~660 minutes assuming perfect
> scheduling and good performance. That is 11hs! The next windows build can
> be started in 11hs. Please check my math and tell me I'm wrong!
>
> If you submit a MR today, with some luck, you'll be able to know if it
> will be mergeable some time tomorrow. At which point you can assign it to
> marge, and marge, if you are lucky and the set of patches she tries to
> merge together is mergeable, will merge you work into master probably some
> time on Friday. If a job fails, well you have to start over again.
>
> What are our options here? Ben has been pretty clear about not wanting a
> broken commit for windows to end up in the tree, and I'm there with him.
>
> Cheers,
>  Moritz


Re: Pattern matching desugaring regression? Re: [Haskell-cafe] Why does my module take so long to compile?

2021-02-15 Thread Sebastian Graf
Hi,

I'm not sure I see all the context of the conversation, but it is entirely
possible that code with many local constraints regresses the pattern-match
checker (which is accounted to Desugaring in the profile emitted by -v2),
I'm afraid. That simply has to do with the fact that we now actually care
about them; previously they were mostly discarded.
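For illustration, a made-up toy example (not from Troels's code) of what "local constraints" means here: every GADT match below brings a local type-equality constraint into scope, and since the 8.10-era checker these are tracked rather than discarded.

```haskell
{-# LANGUAGE GADTs #-}

-- Each clause of 'eval' matches a constructor that introduces a local
-- equality constraint (e.g. matching I reveals a ~ Int), which the
-- coverage checker must now take into account when checking the match.
data Expr a where
  I   :: Int  -> Expr Int
  B   :: Bool -> Expr Bool
  Add :: Expr Int -> Expr Int -> Expr Int

eval :: Expr a -> a
eval (I n)     = n
eval (B b)     = b
eval (Add x y) = eval x + eval y
```

Large case expressions over such types multiply the constraints the checker has to solve, which is the suspected source of the desugaring-time regression.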

I'd be glad if you submitted a relatively isolated reproducer of what is
fast with 8.8 and slow with 8.10 (even better 9.0).
I hope that things have improved since we fixed
https://gitlab.haskell.org/ghc/ghc/-/issues/17836, which is part of 9.0 but
not of 8.10.

Cheers,
Sebastian

Am Mo., 15. Feb. 2021 um 19:04 Uhr schrieb Troels Henriksen <
at...@sigkill.dk>:

> Carter Schonwald  writes:
>
> > Ccing ghc devs since that’s a better forum perhaps
> > Crazy theory:
> >
> > this is a regression due the the partial changes to pattern matching
> > coverage checking in 8.10 that finished / landed in ghc 9
> >
> > Why:
> > Desugaring is when pattern/case statement translation happens, I think?
> > And the only obvious “big thing” is that you have some huge, albeit sane
> > for a compiler, pattern matching.
> >
> > I’d first check if the new ghc 9 release doesn’t have that regression in
> > build time that you experienced.  And if it does, file a ticket.
> >
> > I may be totally wrong, but that seems like a decent likelihood !
>
> You may be right!  Another module that regressed is also mainly
> characterised by large-but-not-insane case expressions:
>
> https://github.com/diku-dk/futhark/blob/d0839412bdd11884d75a1494dd5de5191833f39e/src/Futhark/Optimise/Simplify/Rules.hs
>
> I'll try to split these modules up a little bit (I should have done so a
> while ago anyway) and maybe that will make the picture even clearer.
>
> --
> \  Troels
> /\ Henriksen


Re: Plan for GHC 9.2

2021-02-11 Thread Sebastian Graf
Hi,

Since my hopes of finally merging Nested CPR have recently been crushed
again, I hope that we can include the implementation of the
UnliftedDatatypes extension (proposal, implementation).
It was on ice since it depends on the BoxedRep proposal, but if BoxedRep is
going to make it, surely UnliftedDatatypes can make it, too.
I expect quite a few bugs, simply because I don't have much code to test it
on yet. But I'm very confident that existing code isn't impacted by that,
as most of the functionality (CodeGen for unlifted types, most importantly)
was already there and I only had to refine a conditional here and there.

Cheers,
Sebastian

Am Mi., 10. Feb. 2021 um 18:42 Uhr schrieb Ben Gamari :

> Roland Senn  writes:
>
> > I hope ticket #19157 will make it in the GHC 9.2 release. In the GHCi
> > debugger it adds the possibility to set ignore counts for breakpoints.
> > The next  times the breakpoint is reached, the program's
> > execution does not stop. This feature is available in nearly every
> > debugger, but until now not yet in the GHCi debugger.
> > Merge request !4839 is ready for review  (and it's NOT rocket
> > science...)
> >
> Indeed, this seems quite reasonable. I don't see any reason why we
> shouldn't be able to get it in to 9.2.1.
>
> Cheers,
>
> - Ben
>


Re: GHC Reading Guide

2021-02-05 Thread Sebastian Graf
Hi Takenobu,

thanks for updating that resource! I know that it was helpful to a couple
of students of mine to get a big picture of GHC.

I don't have anything to add. There are quite a few more -ddump-* flags for
the different Core passes, but I don't think it's interesting to list all
of them on that slide.
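A quick way to try such flags is on a throwaway module (a made-up toy file; any small definition works):

```haskell
-- Demo.hs — a sketch: compile with, e.g.
--   ghc -O2 -ddump-simpl -dsuppress-all Demo.hs
-- to see the optimised Core for this definition, or swap in
-- -ddump-ds-preopt / -ddump-stg-final to inspect other passes.
sumSquares :: Int -> Int
sumSquares n = sum [x * x | x <- [1 .. n]]
```

Comparing the dumps at -O0 and -O2 is a nice small-scale exercise for newcomers reading the guide.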

Greetings,
Sebastian

Am Fr., 5. Feb. 2021 um 16:09 Uhr schrieb Richard Eisenberg <
r...@richarde.dev>:

> A quick scroll through didn't reveal anything incorrect in my areas of
> expertise.
>
> On the "Dump intermediate languages" slide, you might want to mention
> these flags:
>  -fprint-explicit-kinds: print out kind applications
>  -fprint-explicit-coercions: print out details of coercions
>  -fprint-typechecker-elaboration: print out extra gubbins the type-checker
> inserts
>  -fprint-explicit-runtime-reps: don't simplify away RuntimeRep arguments
>
>  -ddump-ds-preopt: print out the desugared Core before the very first
> "simple" optimization pass
>
> Thanks for writing & sharing!
> Richard
>
> > On Feb 5, 2021, at 7:05 AM, Takenobu Tani  wrote:
> >
> > Dear devs,
> >
> > I've written a simple document about "GHC source reading" for myself
> > and potential newcomers:
> >
> >  * https://takenobu-hs.github.io/downloads/haskell_ghc_reading_guide.pdf
> >(https://github.com/takenobu-hs/haskell-ghc-reading-guide)
> >
> > Please teach me if something's wrong. I'll learn and correct them.
> >
> > Happy Haskelling :)
> >
> > Regards,
> > Takenobu


Re: Plan for GHC 9.2

2021-02-04 Thread Sebastian Graf
Hi Ben,

Since part of the changes of
https://gitlab.haskell.org/ghc/ghc/-/issues/14422 are already merged into
master (e.g. we ignore the "type signature" part of a COMPLETE sig now,
because there is nothing to disambiguate), it would be good if we merged
the solution outlined in
https://gitlab.haskell.org/ghc/ghc/-/issues/14422#note_321645, as that
would allow users to switch to a new, better mechanism instead of
discovering that COMPLETE signatures seemingly have been stripped of a
feature.
The problem with that is that it needs a GHC proposal, I think, and that's
not written yet.
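For context, a minimal example of the feature family under discussion (a hedged sketch; the #14422 disambiguation mechanism itself is exactly what is being redesigned): a COMPLETE pragma tells the coverage checker that a set of pattern synonyms together covers all cases.

```haskell
{-# LANGUAGE PatternSynonyms, ViewPatterns #-}

pattern Zero :: Int
pattern Zero = 0

pattern NonZero :: Int
pattern NonZero <- ((/= 0) -> True)

-- Without this pragma, 'describe' would trigger an incomplete-match
-- warning even though the two synonyms together match every Int.
{-# COMPLETE Zero, NonZero #-}

describe :: Int -> String
describe Zero    = "zero"
describe NonZero = "nonzero"
```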

Also I hope to merge some efforts in the CPR area before the fork. But
that's quite optional.

Cheers,
Sebastian

Am Do., 4. Feb. 2021 um 19:56 Uhr schrieb Ben Gamari :

>
> tl;dr. Provisional release schedule for 9.2 enclosed. Please discuss,
>especially if you have something you would like merged for 9.2.1.
>
> Hello all,
>
> With GHC 9.0.1 at long-last out the door, it is time that we start
> turning attention to GHC 9.2. I would like to avoid making the mistake
> made in the 9.0 series in starting the fork in a state that required a
> significant amount of backporting to be releaseable. Consequently, I
> want to make sure that we have a fork schedule that is realistic given
> the things that need to be merged for 9.2. These include:
>
>  * Update haddock submodule in `master` (Ben)
>  * Bumping bytestring to 0.11 (#19091, Ben)
>  * Finishing the rework of sized integer primops (#19026, John Ericson)
>  * Merge of ghc-exactprint into GHC? (Alan Zimmerman, Henry)
>  * Merge BoxedRep (#17526, Ben)
>  * ARM NCG backend and further stabilize Apple ARM support? (Moritz)
>  * Some form of coercion zapping (Ben, Simon, Richard)
>  * Tag inference analysis and tag check elision (Andreas)
>
> If you see something that you would like to see in 9.2.1 please do
> holler. Otherwise, if you see your name in this list it would be great
> if you could let me know when you think your project may be in a
> mergeable state.
>
> Ideally we would strive for a schedule like the following:
>
> 4 February 2021:   We are here
>~4 weeks pass
> 3 March 2021:  Release branch forked
>1 week passes
> 10 March 2021: Alpha 1 released
>3 weeks pass
> 31 March 2021: Alpha 2 released
>2 weeks pass
> 14 April 2021: Alpha 3 released
>2 weeks pass
> 28 April 2021: Alpha 4 released
>1 week passes
> 5 May 2021:Beta 1 released
>1 week passes
> 12 May 2021:   Release candidate 1 released
>2 weeks pass
> 26 May 2021:   Final release
>
> This provides ample time for stabilization while avoiding deviation from
> the usual May release timeframe. However, this would require that we
> move aggressively to start getting the tree into shape since the fork
> would be less than four weeks away. I would appreciate contributors'
> thoughts on the viability of this timeline.
>
> Cheers,
>
> - Ben


Re: Benchmarking experiences: Cabal test vs compiling nofib/spectral/simple/Main.hs

2021-01-23 Thread Sebastian Graf
Hi Andreas,

I similarly benchmark compiler performance by compiling Cabal, but only
occasionally. I mostly trust ghc/alloc metrics in CI and check Cabal when I
think there's something afoot and/or want to measure runtime, not only
allocations.

I'm inclined to think that for my purposes (testing the impact of
optimisations) the GHC codebase offers sufficient variety to turn up
fundamental regressions, but maybe it makes sense to build some packages
from head.hackage to detect regressions like
https://gitlab.haskell.org/ghc/ghc/-/issues/19203 earlier. It's all a bit
open-ended and I frankly think I wouldn't get anything done if all my
patches had to get to the bottom of all regressions and improvements
on the entire head.hackage set. I somewhat trust that users will complain
eventually and file a bug report and that our CI efforts mean that compiler
performance will improve in the mean.

Although it's probably more of a tooling problem: I simply don't know how
to collect the compiler performance metrics for arbitrary cabal packages.
If these metrics would be collected as part of CI, maybe as a nightly or
weekly job, it would be easier to get to the bottom of a regression before
it manifests in a released GHC version. But it all depends on how easy that
would be to set up and how many CI cycles it would burn, and I certainly
don't feel like I'm in a position to answer either question.

Cheers,
Sebastian


Am Mi., 20. Jan. 2021 um 15:28 Uhr schrieb Andreas Klebinger <
klebinger.andr...@gmx.at>:

> Hello Devs,
>
> When I started to work on GHC a few years back the Wiki recommended
> using nofib/spectral/simple/Main.hs as
> a test case for compiler performance changes. I've been using this ever
> since.
>
> "Recently" the cabal-test (compiling cabal-the-library) has become sort
> of a default benchmark for GHC performance.
> I've used the Cabal test as well and it's probably a better test case
> than nofib/spectral/simple/Main.hs.
> I've started using both, usually using spectral/simple to benchmark
> intermediate changes and then looking
> at the cabal test for the final patch at the end. So far I have rarely
> seen a large
> difference between using cabal or spectral/simple. Sometimes the
> magnitude of the effect was different
> between the two, but I've never seen one regress/improve while the other
> didn't.
>
> Since the topic came up recently in a discussion I wonder if others use
> similar means to quickly bench ghc changes
> and what your experiences were in terms of simpler benchmarks being
> representative compared to the cabal test.
>
> Cheers,
> Andreas


Re: presentation: Next-gen Haskell Compilation Techniques

2021-01-10 Thread Sebastian Graf
Hi Csaba,

Thanks for your presentation, that's a nice high-level overview of what
you're up to.

A few thoughts:

   - Whole-program optimization sounds great, but also very ambitious,
   given the amount of code GHC generates today. I'd be amazed to see advances
   in that area, though, and your >100-module CFA performance inspires hope!
   - I wonder if going through GRIN results in a more efficient mapping to
   hardware. I recently found that the code GHC generates is dominated by
   administrative traffic from and to the heap [1]. I suspect that you can
   have big wins here if you manage to convey better call stack, heap and
   alias information to LLVM.
   - The Control Analysis+specialisation approach sounds pretty similar to
   doing Constructor Specialisation [2] for Lambdas (cf. 6.2) if you also
   inline the function for which you specialise afterwards. I sunk many hours
   into making that work reliably, fast and without code bloat in the past, to
   no avail. Frankly, if you can do it in GRIN, I don't see why we couldn't do
   it in Core. But maybe we can learn from the GRIN implementation afterwards
   and maybe rethink SpecConstr. Maybe the key is not to inline the function
   for which we specialise? But then you don't gain that much...
   - I follow the Counting Immutable Beans [3] stuff quite closely
   (Sebastian is a colleague of mine) and hope that it is applicable to
   Haskell some day. But I think using Perceus, like any purely RC-based
   memory management scheme, means that you can't have cycles in your heap, so
   no loopy thunks (such as constant-space `ones = 1:ones`) and mutability. I
   think that makes a pretty huge difference for many use cases. Sebastian
   also told me that they have to adapt their solutions to the cycle
   restriction from time to time, so far always successfully. But it comes at
   a cost: You have to adapt the code you want to write into a form that works.
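
To make the loopy-thunk point concrete, here is the cyclic binding in question as plain GHC Haskell (nothing Perceus-specific): the single cons cell it allocates refers back to itself, forming a heap cycle that a purely RC-based scheme could never free.

```haskell
-- Knot-tied, constant-space infinite list: GHC allocates one cons
-- cell whose tail pointer refers back to the cell itself, i.e. a heap cycle.
ones :: [Int]
ones = 1 : ones

main :: IO ()
main = print (take 5 ones)  -- prints [1,1,1,1,1]
```

Under Perceus-style reference counting, `ones` would instead have to be rewritten as a generator that allocates fresh cells on demand.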

I only read the slides, apologies if some of my points were invalidated by
something you said.

Keep up the good work!
Cheers,
Sebastian

[1] https://gitlab.haskell.org/ghc/ghc/-/issues/19113
[2]
https://www.microsoft.com/en-us/research/wp-content/uploads/2016/07/spec-constr.pdf
[3] https://arxiv.org/abs/1908.05647

On Sun, Jan 10, 2021 at 00:31 Csaba Hruska <csaba.hru...@gmail.com> wrote:

> Hello,
> I did an online presentation about Haskell related (futuristic)
> compilation techniques.
> The application of these methods is also the main motivation of my work
> with the grin compiler project and ghc-wpc.
>
> video: https://www.youtube.com/watch?v=jyaR8E325ok
> slides:
> https://docs.google.com/presentation/d/1g_-bHgeD7lV4AYybnvjgkWa9GKuP6QFUyd26zpqXssQ/edit?usp=sharing
>
> Regards,
> Csaba


Re: can't get raw logs -- server down?

2020-12-24 Thread Sebastian Graf
Hi Richard,

I can access the log just fine. In case you still can't, here it is (at
least for a few days):
https://1drv.ms/u/s!AvWV1ZpCBdeckIF5-Stx7QZ0gWiUKg?e=NYjO2H

Hope that helps.

Merry Christmas :)
Sebastian

On Thu, Dec 24, 2020 at 17:38 Richard Eisenberg <r...@richarde.dev> wrote:

> Hi devs,
>
> I have a failing CI run at
> https://gitlab.haskell.org/ghc/ghc/-/jobs/533674. I can't repro the
> problem locally, so I wanted to see the raw log. But clicking the raw log
> button (leading to https://gitlab.haskell.org/ghc/ghc/-/jobs/533674/raw)
> gets nothing. It looks like a server is down.
>
> Help?
>
> Thanks!
> Richard


Re: Nested constructed product results?

2020-12-15 Thread Sebastian Graf
Hi Alexis,

that's a very interesting example you have there!

So far, what we referred to as Nested CPR concerned unboxing for returned
nested *records*, e.g., the `annotation` field in your example. That's what
I try to exploit in !1866, which after a
rebase that I'll hopefully be doing this week, some more sleuthing and then
documenting what I did will finally make it into master.

CPR'ing the Lambda, i.e., what is returned for `parser`, on the other hand,
is a surprising new opportunity for what Nested CPR could do beyond
unboxing records! And it's pretty simple, too: Because it's a function, we
don't care about subtleties such as whether all callers actually evaluate
the pair that deep (actually, that's wrong, as I realise below). I think
it's entirely within the reach of !1866 today. So we could transform
(provided that `(,) <$> a <*> b` inlines `<$>` and `<*>` and then will
actually have the CPR property)

AnnotatedParser ann1 f <+> AnnotatedParser ann2 g = AnnotatedParser
  { annotation = Seq ann1 ann2
  , parser = \s1 ->
  let !(a, s2) = f s1
  !(b, s3) = g s2
  in ((,) <$> a <*> b, s3)
  }

to

$w<+> :: Annotation
  -> (String -> (Maybe a, String))
  -> Annotation
  -> (String -> (Maybe b, String))
  -> (# Annotation, String -> (# Maybe (a, b), String #) #)
$w<+> ann1 f ann2 g =
  (# Seq ann1 ann2
   , \s1 -> case (let !(a, s2) = f s1
                      !(b, s3) = g s2
                  in ((,) <$> a <*> b, s3)) of
              (p, q) -> (# p, q #) #)

<+> :: AnnotatedParser a -> AnnotatedParser b -> AnnotatedParser (a, b)
<+> (AnnotatedParser ann1 f) (AnnotatedParser ann2 g) =
  case $w<+> ann1 f ann2 g of
    (# a, b #) -> AnnotatedParser a (\s1 -> case b s1 of (# p, q #) -> (p, q))
{-# INLINE <+> #-}

Actually writing out the transformation tells me that this isn't always a
win: We now have to allocate a lambda in the wrapper. That is only a win if
that lambda cancels away at call sites! So we have to make sure that all
call sites of the wrapper actually call the `parser`, so that the lambda
simplifies away. If it doesn't, we have a situation akin to reboxing. So I
was wrong above when I said "we don't care about subtleties such as whether
all callers actually evaluate the pair that deep": We very much need to
know whether all call sites call the lambda. Luckily, I implemented just
that for
exploitation by Nested CPR! That's the reason why I need to rebase !1866
now. I'll keep you posted.

---

You might wonder why CPR today doesn't care for lambdas. Well, they only
make sense in nested scenarios (otherwise the function wasn't eta-expanded
that far, for good reasons) and CPR currently doesn't bother unboxing
records nestedly, which is what #18174 discusses and what
!1866 tries to fix.

Cheers,
Sebastian

On Tue, Dec 15, 2020 at 06:52 Alexis King <lexi.lam...@gmail.com> wrote:

> Hi all,
>
> I spent some time today looking into the performance of a program
> involving a parser type that looks something like this:
>
> data AnnotatedParser a = AnnotatedParser
>   { annotation :: Annotation
>   , parser :: String -> (Maybe a, String)
>   }
>
> The `Annotation` records metadata about the structure of an
> `AnnotatedParser` that can be accessed statically (that is, without
> having to run the parser on some input). `AnnotatedParser`s are built
> from various primitive constructors and composed using various
> combinators. These combinators end up looking something like this:
>
> (<+>) :: AnnotatedParser a -> AnnotatedParser b -> AnnotatedParser (a,
> b)
> AnnotatedParser ann1 f <+> AnnotatedParser ann2 g = AnnotatedParser
>   { annotation = Seq ann1 ann2
>   , parser = \s1 ->
>   let !(a, s2) = f s1
>   !(b, s3) = g s2
>   in ((,) <$> a <*> b, s3)
>   }
>
> Use of these combinators leads to the construction and subsequent case
> analysis of numerous `AnnotatedParser` closures. Happily, constructed
> product result[1] analysis kicks in and rewrites such combinators to cut
> down on the needless boxing, leading to worker/wrapper splits like this:
>
> $w<+> :: Annotation
>   -> (String -> (Maybe a, String))
>   -> Annotation
>   -> (String -> (Maybe b, String))
>   -> (# Annotation, String -> (Maybe (a, b), String) #)
> $w<+> ann1 f ann2 g =
>   (# Seq ann1 ann2
>, \s1 -> let !(a, s2) = f s1
> !(b, s3) = g s2
> in ((,) <$> a <*> b, s3) #)
>
> <+> :: AnnotatedParser a -> AnnotatedParser b -> AnnotatedParser (a, b)
> <+> (AnnotatedParser ann1 f) (AnnotatedParser ann2 g) =
>   case $w<+> ann1 f ann2 g of
> (# a, b #) -> AnnotatedParser a b
> {-# INLINE <+> #-}

Re: Hadrian: Error "cannot find -lHSrts-1.0_thr_l" when linking ghc executable.

2020-11-25 Thread Sebastian Graf
Oh, nevermind. See the comments on
https://gitlab.haskell.org/ghc/ghc/-/commit/fc644b1a643128041cfec25db84e417851e28bab.
Apparently, master really is broken in devel2 flavour!

On Wed, Nov 25, 2020 at 10:38 Sebastian Graf <sgraf1...@gmail.com> wrote:

> Hi Roland,
>
> Since you mention an RTS variant with an _l suffix (for eventlog), I
> suspect that you may have to (reboot? and) reconfigure after
> https://gitlab.haskell.org/ghc/ghc/-/issues/18948 and
> https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4448.
>
> Does that help?
>
> Cheers,
> Sebastian
>
> On Wed, Nov 25, 2020 at 10:33 Roland Senn wrote:
>
>> Hi there!
>>
>> I have two GHC trees. One is called ghcdebug and is a little bit
>> outdated (16. September 2020). In this tree, I can work normally:
>> Change code, compile and run the resulting GHC compiler.
>>
>> The second tree is up-to-date! Compiling works fine, however the linker
>> steps for the ghc executable fails. When I restart the failing link
>> step, I get the following error:
>>
>> $ hadrian/build --flavour=devel2 --freeze1 -j2 stage2:exe:ghc-bin
>> Up to date
>> | Run Ghc LinkHs Stage1: _build/stage1/ghc/build/c/hschooks.o (and 7
>> more) => _build/stage1/bin/ghc
>> /usr/bin/ld.gold: error: cannot find -lHSrts-1.0_thr_l
>> _build/stage1/ghc/build/GHCi/Leak.o(.text+0xd5): error: undefined
>> reference to 'stg_upd_frame_info'
>> _build/stage1/ghc/build/GHCi/Leak.o(.text+0x1c2): error: undefined
>> reference to 'stg_upd_frame_info'
>> _build/stage1/ghc/build/GHCi/Leak.o(.text+0x1e6): error: undefined
>> reference to 'stg_ap_ppp_info'
>>   ...  and many many more lines with "error: undefined
>> reference to ..."
>>
>> I get the same error when I use a freshly cloned ghc tree. I also get
>> the error independently of the GHC version used to compile (8.8.3 or
>> 8.10.2). I didn't modify any settings files.
>>
>> I do:
>>hadrian/build clean
>>hadrian/build --flavour=devel2 -j2
>>
>> I'm using a normal x86_64 box with plain vanilla Linux Debian 10. No
>> Docker and no Nix.
>>
>> The directory _build/stage/rts/build contains:
>>
>> roland@goms:~/Projekte/ghc$ ls -all _build/stage1/rts/build/
>> total 19396
>> drwxr-xr-x 5 roland roland     4096 Nov 24 17:58 .
>> drwxr-xr-x 3 roland roland     4096 Nov 24 17:56 ..
>> drwxr-xr-x 2 roland roland     4096 Nov 24 17:56 autogen
>> drwxr-xr-x 7 roland roland    12288 Nov 24 17:58 c
>> drwxr-xr-x 2 roland roland     4096 Nov 24 17:58 cmm
>> -rw-r--r-- 1 roland roland    31349 Nov 24 17:58 DerivedConstants.h
>> -rw-r--r-- 1 roland roland    13764 Nov 24 17:57 ffi.h
>> -rw-r--r-- 1 roland roland     4343 Nov 24 17:57 ffitarget.h
>> -rw-r--r-- 1 roland roland    15128 Nov 24 17:58 ghcautoconf.h
>> -rw-r--r-- 1 roland roland      618 Nov 24 17:58 ghcplatform.h
>> -rw-r--r-- 1 roland roland      732 Nov 24 17:58 ghcversion.h
>> -rw-r--r-- 1 roland roland    82486 Nov 24 17:57 libCffi.a
>> lrwxrwxrwx 1 roland roland        9 Nov 24 17:57 libCffi_thr.a -> libCffi.a
>> -rw-r--r-- 1 roland roland  9166966 Nov 24 17:58 libHSrts-1.0.a
>> -rw-r--r-- 1 roland roland 10490730 Nov 24 17:58 libHSrts-1.0_thr.a
>>
>> Any ideas what's wrong or how to fix?
>>
>> Many thanks and Cheers,
>>Roland
>>


Re: Hadrian: Error "cannot find -lHSrts-1.0_thr_l" when linking ghc executable.

2020-11-25 Thread Sebastian Graf
Hi Roland,

Since you mention an RTS variant with an _l suffix (for eventlog), I
suspect that you may have to (reboot? and) reconfigure after
https://gitlab.haskell.org/ghc/ghc/-/issues/18948 and
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/4448.

Does that help?

Cheers,
Sebastian

On Wed, Nov 25, 2020 at 10:33 Roland Senn wrote:

> Hi there!
>
> I have two GHC trees. One is called ghcdebug and is a little bit
> outdated (16. September 2020). In this tree, I can work normally:
> Change code, compile and run the resulting GHC compiler.
>
> The second tree is up-to-date! Compiling works fine, however the linker
> steps for the ghc executable fails. When I restart the failing link
> step, I get the following error:
>
> $ hadrian/build --flavour=devel2 --freeze1 -j2 stage2:exe:ghc-bin
> Up to date
> | Run Ghc LinkHs Stage1: _build/stage1/ghc/build/c/hschooks.o (and 7
> more) => _build/stage1/bin/ghc
> /usr/bin/ld.gold: error: cannot find -lHSrts-1.0_thr_l
> _build/stage1/ghc/build/GHCi/Leak.o(.text+0xd5): error: undefined
> reference to 'stg_upd_frame_info'
> _build/stage1/ghc/build/GHCi/Leak.o(.text+0x1c2): error: undefined
> reference to 'stg_upd_frame_info'
> _build/stage1/ghc/build/GHCi/Leak.o(.text+0x1e6): error: undefined
> reference to 'stg_ap_ppp_info'
>   ...  and many many more lines with "error: undefined
> reference to ..."
>
> I get the same error when I use a freshly cloned ghc tree. I also get
> the error independently of the GHC version used to compile (8.8.3 or
> 8.10.2). I didn't modify any settings files.
>
> I do:
>hadrian/build clean
>hadrian/build --flavour=devel2 -j2
>
> I'm using a normal x86_64 box with plain vanilla Linux Debian 10. No
> Docker and no Nix.
>
> The directory _build/stage/rts/build contains:
>
> roland@goms:~/Projekte/ghc$ ls -all _build/stage1/rts/build/
> total 19396
> drwxr-xr-x 5 roland roland     4096 Nov 24 17:58 .
> drwxr-xr-x 3 roland roland     4096 Nov 24 17:56 ..
> drwxr-xr-x 2 roland roland     4096 Nov 24 17:56 autogen
> drwxr-xr-x 7 roland roland    12288 Nov 24 17:58 c
> drwxr-xr-x 2 roland roland     4096 Nov 24 17:58 cmm
> -rw-r--r-- 1 roland roland    31349 Nov 24 17:58 DerivedConstants.h
> -rw-r--r-- 1 roland roland    13764 Nov 24 17:57 ffi.h
> -rw-r--r-- 1 roland roland     4343 Nov 24 17:57 ffitarget.h
> -rw-r--r-- 1 roland roland    15128 Nov 24 17:58 ghcautoconf.h
> -rw-r--r-- 1 roland roland      618 Nov 24 17:58 ghcplatform.h
> -rw-r--r-- 1 roland roland      732 Nov 24 17:58 ghcversion.h
> -rw-r--r-- 1 roland roland    82486 Nov 24 17:57 libCffi.a
> lrwxrwxrwx 1 roland roland        9 Nov 24 17:57 libCffi_thr.a -> libCffi.a
> -rw-r--r-- 1 roland roland  9166966 Nov 24 17:58 libHSrts-1.0.a
> -rw-r--r-- 1 roland roland 10490730 Nov 24 17:58 libHSrts-1.0_thr.a
>
> Any ideas what's wrong or how to fix?
>
> Many thanks and Cheers,
>Roland
>


Re: Restricted sums in BoxedRep

2020-10-14 Thread Sebastian Graf
I believe Simon once told me that NULL pointers are not an option in places
where we assume BoxedRep things, because the GC assumes it is free to
follow that pointer. It won't check whether it's NULL or not.
That's also the reason why we lower `LitRubbish` (which we use for absent
BoxedRep literals) as `()` when going to STG:

-- We can't use NULL, so supply an arbitrary (untyped!) closure that we
know can be followed by the GC.
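
For context, the user-space workaround mentioned in this thread ("lots of fuss and care") replaces the forbidden NULL with a private sentinel closure compared by pointer equality. This is a hedged sketch, not a real library API; all names here (`Sentinel`, `none`, `some`, `caseNullable`) are illustrative:

```haskell
{-# LANGUAGE MagicHash #-}
module Nullable (none, some, caseNullable) where

import GHC.Exts (Any, isTrue#, reallyUnsafePtrEquality#)
import Unsafe.Coerce (unsafeCoerce)

-- A private, never-exported closure playing the role of NULL.
-- Unlike a true NULL, the GC can follow this pointer safely.
data Sentinel = Sentinel

none :: Any
none = unsafeCoerce Sentinel
{-# NOINLINE none #-}  -- pointer identity must stay stable

some :: a -> Any
some = unsafeCoerce

-- Eliminate by pointer comparison against the sentinel.
-- Unsound if the payload could ever alias the sentinel itself!
caseNullable :: Any -> r -> (a -> r) -> r
caseNullable v nothingCase justCase
  | isTrue# (reallyUnsafePtrEquality# v none) = nothingCase
  | otherwise                                 = justCase (unsafeCoerce v)
```

The `NOINLINE` and aliasing caveats are exactly the kind of fuss that a native, levity-based encoding would make unnecessary.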

On Wed, Oct 14, 2020 at 13:46 Spiwack, Arnaud <arnaud.spiw...@tweag.io> wrote:

> I may have misunderstood, but my understanding is the following:
>
> - Since a is a boxed type, it can never be the null pointer
> - So I can use a null pointer unambiguously
>
> Let's call this null-pointer-expanded type `Nullable# a`; it is now a
> different sort than `a`, since it can have the null pointer in it. Is that
> still what you wish for? The risk being that combining such a `Nullable# a`
> with another data structure may very well require an additional boxing,
> which is what, I believe, you were trying to avoid.
>
> /Arnaud
>
> On Tue, Oct 13, 2020 at 11:47 PM David Feuer 
> wrote:
>
>> Null pointers are widely known to be a lousy language feature in general,
>> but there are certain situations where they're *really* useful for compact
>> representation. For example, we define
>>
>> newtype TMVar a = TMVar (TVar (Maybe a))
>>
>> We don't, however, actually use the fact that (Maybe a) is lifted. So we
>> could represent this much more efficiently using something like
>>
>> newtype TMVar a = TMVar (TVar a)
>>
>> where Nothing is represented by a distinguished "null" pointer.
>>
>> While it's possible to implement this sort of thing in user code (with
>> lots of fuss and care), it's not very nice at all. What I'd really like to
>> be able to do is represent certain kinds of sums like this natively.
>>
>> Now that we're getting BoxedRep, I think we can probably make it happen.
>> The trick is to add a special Levity constructor representing sums of
>> particular shapes. Specifically, we can represent a type like this if it is
>> a possibly-nested sum which, when flattened into a single sum, consists of
>> some number of nullary tuples and at most one Lifted or Unlifted type.
>> Then we can have (inline) primops to convert between the BoxedRep and the
>> sum-of-sums representations.
>>
>> Anyone have thoughts on details for what the Levity constructor arguments
>> might look like?


Re: Parser depends on DynFlags, depends on Hooks, depends on TcM, DsM, ...

2020-09-10 Thread Sebastian Graf
Hi Sylvain,

Thanks, that answers all my questions! Keep up the great work.

Cheers
Sebastian

On Thu, Sep 10, 2020 at 17:24 Sylvain Henry wrote:

> Hi Sebastian,
>
> Last month I tried to make a DynFlags-free parser. The branch is here:
> https://gitlab.haskell.org/hsyl20/ghc/-/commits/hsyl20/dynflags/parser
> (doesn't build iirc)
>
> 1) The input of the parser is almost DynFlags-free thanks to Alec's patch
> [1]. On that front, we just have to move `mkParserFlags` out of GHC.Parser.
> I would put it alongside other functions generating config datatypes from
> DynFlags in GHC.Driver.Config (added yesterday). It's done in my branch and
> it only required a bit of plumbing to fix `lexTokenStream` iirc.
>
> 2) The output of the parser is the issue, as you point out. The main issue
> is that it uses SDoc/ErrMsg which are dependent on DynFlags.
>
> In the branch I've tried to avoid the use of SDoc by using ADTs to return
> errors and warnings so that the client of the parser would be responsible
> for converting them into SDoc if needed. This is the approach that we would
> like to generalize [2]. The ADT would look like [3] and the pretty-printing
> module like [4]. The idea was that ghc-lib-parser wouldn't integrate the
> pretty-printing module to avoid the dependencies.
>
> I think it's the best interface (for IDEs and other tools) so we just have
> to complete the work :). The branch stalled because I've tried to avoid
> SDoc even in the pretty-printing module and used Doc instead of SDoc but it
> wasn't a good idea... I'll try to resume the work soon.
>
> In the meantime I've been working on making Outputable/SDoc independent of
> DynFlags. If we merge [5] in some form then the last place where we use
> `sdocWithDynFlags` will be in CLabel's Outputable instance (to fix this I
> think we could just depend on the PprStyle (Asm or C) instead of querying
> the backend in the DynFlags). This could be another approach to make the
> parser almost as it is today independent of DynFlags. A side-effect of this
> work is that ghc-lib-parser could include the pretty-printing module too.
>
> So to answer your question:
>
> > Would you say it's reasonable to abstract the definition of `PState`
> over the `DynFlags` type?
>
> We're close to remove the dependence on DynFlags so I would prefer this
> instead of trying to abstract over it.
>
> The roadmap:
>
> 1. Make Outputable/SDoc independent of DynFlags
> 1.1 Remove sdocWithDynFlags used to query the platform (!3972)
> 1.2 Remove sdocWithDynFlags used to query the backend in CLabel's
> Outputable instance
> 1.3 Remove sdocWithDynFlags
> 2. Move mkParserFlags from GHC.Parser to GHC.Driver.Config
> 3. (Make the parser use ADTs to return errors/warnings)
>
> Cheers,
> Sylvain
>
> [1]
> https://gitlab.haskell.org/ghc/ghc/-/commit/469fe6133646df5568c9486de2202124cb734242
> [2]
> https://gitlab.haskell.org/ghc/ghc/-/wikis/Errors-as-(structured)-values
> [3]
> https://gitlab.haskell.org/hsyl20/ghc/-/blob/hsyl20/dynflags/parser/compiler/GHC/Parser/Errors.hs
> [4]
> https://gitlab.haskell.org/hsyl20/ghc/-/blob/hsyl20/dynflags/parser/compiler/GHC/Parser/Errors/Ppr.hs
> [5] https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3972
>
>
> On 10/09/2020 15:12, Sebastian Graf wrote:
>
> Hey Sylvain,
>
> In https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3971 I had to
> fight once more with the transitive dependency set of the parser, the
> minimality of which is crucial for ghc-lib-parser
> <https://hackage.haskell.org/package/ghc-lib-parser> and tested by the
> CountParserDeps test.
>
> I discovered that I need to make (parts of) `DsM` abstract, because it is
> transitively imported from the Parser for example through Parser.y ->
> Lexer.x -> DynFlags -> Hooks -> {DsM,TcM}.
> Since you are our mastermind behind the "Tame DynFlags" initiative, I'd
> like to hear your opinion on where progress can be/is made on that front.
>
> I see there is https://gitlab.haskell.org/ghc/ghc/-/issues/10961 and
> https://gitlab.haskell.org/ghc/ghc/-/issues/11301 which ask a related,
> but different question: They want a DynFlags-free interface, but I even
> want a DynFlags-free *module*.
>
> Would you say it's reasonable to abstract the definition of `PState` over
> the `DynFlags` type? I think it's only used for pretty-printing messages,
> which is one of your specialties (the treatment of DynFlags in there, at
> least).
> Anyway, can you think of or perhaps point me to an existing road map on
> that issue?
>
> Thank you!
> Sebastian
>
>


Parser depends on DynFlags, depends on Hooks, depends on TcM, DsM, ...

2020-09-10 Thread Sebastian Graf
Hey Sylvain,

In https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3971 I had to fight
once more with the transitive dependency set of the parser, the minimality
of which is crucial for ghc-lib-parser and tested by the
CountParserDeps test.

I discovered that I need to make (parts of) `DsM` abstract, because it is
transitively imported from the Parser for example through Parser.y ->
Lexer.x -> DynFlags -> Hooks -> {DsM,TcM}.
Since you are our mastermind behind the "Tame DynFlags" initiative, I'd
like to hear your opinion on where progress can be/is made on that front.

I see there is https://gitlab.haskell.org/ghc/ghc/-/issues/10961 and
https://gitlab.haskell.org/ghc/ghc/-/issues/11301 which ask a related, but
different question: They want a DynFlags-free interface, but I even want a
DynFlags-free *module*.

Would you say it's reasonable to abstract the definition of `PState` over
the `DynFlags` type? I think it's only used for pretty-printing messages,
which is one of your specialties (the treatment of DynFlags in there, at
least).
Anyway, can you think of or perhaps point me to an existing road map on
that issue?

Thank you!
Sebastian


Re: COMPLETE pragmas

2020-09-03 Thread Sebastian Graf
Hi folks,

I implemented what I had in mind in
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3959. CI should turn
green any hour now, so feel free to play with it if you want to.
With the wonderful https://github.com/mpickering/ghc-artefact-nix it will
just be `ghc-head-from 3959`.

Cheers,
Sebastian

On Tue, Sep 1, 2020 at 22:09 Joachim Breitner <m...@joachim-breitner.de> wrote:

> On Tuesday, 2020-09-01, at 10:11 +0200, Sebastian Graf wrote:
> > > 2.) Another scenario that I'd really love to see supported with
> > > COMPLETE pragmas is a way to use | notation with them like you can
> > > with MINIMAL pragmas.
> >
> > (2) is a neat idea, but requires a GHC proposal I'm not currently
> > willing to get into. I can also see a design discussion around
> > allowing arbitrary "formulas" (e.g., not only what is effectively
> > CNF).
> >
> > A big bonus of your design is that it's really easy to integrate into
> > the current implementation, which is what I'd gladly do in case such
> > a proposal would get accepted.
>
> in the original ticket where a COMPLETE pragma was suggested (
> https://gitlab.haskell.org/ghc/ghc/-/issues/8779) the ability to
> specify arbitrary boolean formulas was already present:
>
> “So here is what I think might work well, inspired by the new MINIMAL
> pragma: … The syntax is essentially the same as for MINIMAL, i.e. a
> boolean formula, with constructors and pattern synonyms as atoms. In
> this case”
>
> So one _could_ say that this doesn’t need a proposal, because it would
> just be the implementation finishing the original task ;-)
>
>
> Cheers,
> Joachim
>
> --
> Joachim Breitner
>   m...@joachim-breitner.de
>   http://www.joachim-breitner.de/
>
>


Re: Implicit reboxing of unboxed tuple in let-patterns

2020-09-03 Thread Sebastian Graf
Hi,

Right now, there is one rule: if the type of any variable bound in the
> pattern is unlifted, then the pattern is an unlifted-var pattern and is
> strict.
>

I think the intuition I followed so far was "bindings with unlifted *RHS*
are strict".
So if I take a program in a strict language with Haskell syntax (Idris with
a different syntax, not like -XStrict) and replace all types with their
unlifted counterparts (which should be possible once we have
-XUnliftedDatatypes), then I get exactly the same semantics in GHC Haskell.
I find this property very useful.
As a special case that means that any binders of unlifted type are bound
strictly, if only for uniformity with simple variable bindings. I think my
intuition is different to Richard's rule only for the "unlifted constructor
match with nested lifted-only variable matches" case.
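
A minimal sketch of that intuition, using `Int#` as a stand-in for an arbitrary unlifted type (standard GHC; the function names are illustrative):

```haskell
{-# LANGUAGE MagicHash #-}
module StrictBinders where

import GHC.Exts (Int#, (+#))

-- y has unlifted type Int#, so the let is evaluated eagerly,
-- exactly like a simple variable binding in a strict language.
double# :: Int# -> Int#
double# x = let y = x +# x in y

-- By contrast, the lifted binder z is a lazy thunk unless banged.
double :: Int -> Int
double x = let z = x + x in z
```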

Sebastian

On Thu, Sep 3, 2020 at 14:48 Spiwack, Arnaud <arnaud.spiw...@tweag.io> wrote:

> This thread is suggesting to add a special case -- one that seems to match
>> intuition, but it's still a special case. And my question is: should the
>> special case be for unboxed tuples? or should the special case be for any
>> pattern whose overall type is unlifted?
>>
>
> My intuition would be: for all unlifted types. I'd submit that the
> distinction between lazy and strict pattern-matching doesn't really make a
> ton of sense for unlifted types. To implement lazy pattern-matching on an
> unlifted type, one has to actually indirect through another type, which I
> find deeply suspicious.
>
> That being said
>
> Right now, there is one rule: if the type of any variable bound in the
>> pattern is unlifted, then the pattern is an unlifted-var pattern and is
>> strict. The pattern must be banged, unless the bound variable is not
>> nested. This rule is consistent across all features.
>>
>
>  I realise that there are a lot of subtle details to get right to specify
> pattern-matching. Or at the very least, that it's difficult to come up with
> a straightforward specification which is as clear as the one above.
>
> I'm wondering though: have there been discussions which led to the above
> rule, or did it just come to be, mostly informally? (and if there have been
> explicit discussions, are they recorded somewhere?)


Re: COMPLETE pragmas

2020-09-01 Thread Sebastian Graf
Hi Edward,

I'd expect (1) to work after fixing #14422/#18276.
(3) might have been https://gitlab.haskell.org/ghc/ghc/-/issues/16682, so
it should be fixed nowadays.
(2) is a neat idea, but requires a GHC proposal I'm not currently willing
to get into. I can also see a design discussion around allowing arbitrary
"formulas" (e.g., not only what is effectively CNF).
A big bonus of your design is that it's really easy to integrate into the
current implementation, which is what I'd gladly do in case such a proposal
would get accepted.

Cheers
Sebastian

On Tue, Sep 1, 2020 at 00:26 Edward Kmett wrote:

> I'd be over the moon with happiness if I could hang COMPLETE pragmas on
> polymorphic types.
>
> I have 3 major issues with COMPLETE as it exists.
>
> 1.) Is what is mentioned here:
>
> Examples for me come up when trying to build a completely unboxed 'linear'
> library using backpack. In the end I want/need to supply a pattern synonym
> that works over, say, all the 2d vector types, extracting their elements,
> but right now I just get spammed by incomplete coverage warnings.
>
> type family Elem t :: Type
> class D2 where
>   _V2 :: Iso' t (Elem t, Elem t)
>
> pattern V2 :: D2 t => Elem t -> Elem t -> t
> pattern V2 a b <- (view _V2 -> (a,b)) where
>   V2 a b = review _V2 (a,b)
>
> There is no way to hang a COMPLETE pragma on that.
>
> 2.) Another scenario that I'd really love to see supported with COMPLETE
> pragmas is a way to use | notation with them like you can with MINIMAL
> pragmas.
>
> If you make smart constructors for a dozen constructors in your term type
> (don't judge me!), you wind up needing 2^12 COMPLETE pragmas to describe
> all the ways you might mix regular and smart constructors today.
>
> {# COMPLETE (Lam | LAM), (Var | VAR), ... #-}
>
> would let you get away with a single such definition. This comes up when
> you have some kind of monoid that acts on terms and you want to push it
> down through
> the syntax tree invisibly to the user. Explicit substitutions, shifts in
> position in response to source code edits, etc.
>
> 3.) I had one other major usecase where I failed to be able to use a
> COMPLETE pragma:
>
> type Option a = (# a | (##) #)
>
> pattern Some :: a -> Option a
> pattern Some a = (# a | #)
>
> pattern None :: Option a
> pattern None = (# | (##) #)
>
> {-# COMPLETE Some, None #-}
>
> These worked _within_ a module, but was forgotten across module
> boundaries, which forced me to rather drastically change the module
> structure of a package, but it sounds a lot like the issue being discussed.
> No types to hang it on in the interface file. With the ability to define
> unlifted newtypes I guess this last one is less of a concern now?
>
> -Edward
>
> On Mon, Aug 31, 2020 at 2:29 PM Richard Eisenberg 
> wrote:
>
>> Hooray Sebastian!
>>
>> Somehow, I knew cluing you into this conundrum would help find a
>> solution. The approach you describe sounds quite plausible.
>>
>> Yet: types *do* matter, of course. So, I suppose the trick is this: have
>> the COMPLETE sets operate independent of types, but then use types in the
>> PM-checker when determining impossible cases? And, about your idea for
>> having pattern synonyms store pointers to their COMPLETE sets: I think data
>> constructors can also participate. But maybe there is always at least one
>> pattern synonym (which would be a reasonable restriction), so I guess you
>> can look at the pattern-match as a whole and use the pattern synonym to
>> find the relevant COMPLETE set(s).
>>
>> Thanks for taking a look!
>> Richard
>>
>> On Aug 31, 2020, at 4:23 PM, Sebastian Graf  wrote:
>>
>> Hi Richard,
>>
>> On Mon, Aug 31, 2020 at 21:30 Richard Eisenberg <r...@richarde.dev> wrote:
>>
>>> Hi Sebastian,
>>>
>>> I enjoyed your presentation last week at ICFP!
>>>
>>
>> Thank you :) I'm glad you liked it!
>>
>> This thread (
>>> https://ghc-devs.haskell.narkive.com/NXBBDXg1/suppressing-false-incomplete-pattern-matching-warnings-for-polymorphic-pattern-synonyms)
>>> played out before you became so interested in pattern-match coverage. I'd
>>> be curious for your thoughts there -- do you agree with the conclusions in
>>> the thread?
>>>
>>
>> I vaguely remember reading this thread. As you write there
>> <https://ghc-devs.haskell.narkive.com/NXBBDXg1/suppressing-false-incomplete-pattern-matching-warnings-for-polymorphic-pattern-synonyms#post9>
>>
>> And, while I know it doesn't work today, what's wrong (in theory) with

Re: COMPLETE pragmas

2020-08-31 Thread Sebastian Graf
Hi Richard,

On Mon, Aug 31, 2020 at 21:30 Richard Eisenberg <r...@richarde.dev> wrote:

> Hi Sebastian,
>
> I enjoyed your presentation last week at ICFP!
>

Thank you :) I'm glad you liked it!

This thread (
> https://ghc-devs.haskell.narkive.com/NXBBDXg1/suppressing-false-incomplete-pattern-matching-warnings-for-polymorphic-pattern-synonyms)
> played out before you became so interested in pattern-match coverage. I'd
> be curious for your thoughts there -- do you agree with the conclusions in
> the thread?
>

I vaguely remember reading this thread. As you write there


And, while I know it doesn't work today, what's wrong (in theory) with
>
> {-# COMPLETE LL #-}
>
> No types! (That's a rare thing for me to extol...)
>
> I feel I must be missing something here.
>

Without reading the whole thread, I think that solution is very possible.
The thread goes on to state that we currently attach COMPLETE sets to type
constructors, but that is only an implementational thing. I asked Matt (who
implemented it) somewhere and he said the only reason to attach it to type
constructors was because it was the easiest way to implement serialisation
to interface files.

The thread also mentions that type-directed works better for the
pattern-match checker. In fact I disagree; we have to thin out COMPLETE
sets all the time anyway, for example when new type evidence comes up. It's
quite a hassle to find all the COMPLETE sets of the type constructors that a
given type can be "represented" as (I mean equality modulo type family
reductions here). I'm pretty sure it's broken in multiple ways, as #18276
points out.

Disregarding a bit of busy work for implementing serialisation to interface
files, it's probably far simpler to give each COMPLETE set a Name/Unique
and refer to them from the pattern synonyms that mention them (we'd have to
get creative for orphans, though). The relation is quite like that between a
type class instance and the type in its head. A more worked example is
here: https://gitlab.haskell.org/ghc/ghc/-/issues/18277#note_287827

So, it's on my longer term TODO list to fix this.
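
For concreteness, here is roughly what a COMPLETE set involving a pattern
synonym looks like today (a sketch; the set is keyed by T's type constructor
behind the scenes, and the proposal above is to give the set its own
Name/Unique instead):

```haskell
{-# LANGUAGE PatternSynonyms, ViewPatterns #-}
module CompleteDemo where

data T = A | B | C

-- A unidirectional pattern synonym covering everything that is not A.
pattern NotA :: T
pattern NotA <- ((\t -> case t of A -> False; _ -> True) -> True)

-- Today this COMPLETE set is attached to T's type constructor; under the
-- proposed scheme it would be a named entity referenced from NotA.
{-# COMPLETE A, NotA #-}

f :: T -> Bool
f A    = True
f NotA = False  -- no incomplete-match warning, thanks to the set above
```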


> My motivation for asking is https://github.com/conal/linalg/pull/54 (you
> don't need to read the whole thing), which can be boiled down to a request
> for a COMPLETE pragma that works at a polymorphic result type. (Or a
> COMPLETE pragma written in a module that is not the defining module for a
> pattern synonym.) https://gitlab.haskell.org/ghc/ghc/-/issues/14422
> describes a similar, but even more challenging scenario.
>

I'll answer in the thread. (Oh, you also found #14422.) I think the
approach above will also fix #14422.

>
> Do you see any ways forward here?
>

>
> Thanks!
> Richard


Maybe I'll give it a try tomorrow.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Fusing loops by specializing on functions with SpecConstr?

2020-04-05 Thread Sebastian Graf
>
> That’s it. These two rules alone are enough to eliminate the redundant
> tupling. Now the optimized version of `mapMaybeSF` is beautiful!
>

Beautiful indeed! That's wonderful to hear. Good luck messing about with
your FRP framework!

Sebastian

Am Sa., 4. Apr. 2020 um 03:45 Uhr schrieb Alexis King :

>
> I fiddled with alternative representations for a while and didn’t make
> any progress—it was too easy to end up with code explosion in the
> presence of any unknown calls—but I seem to have found a RULES-based
> approach that works very well on the examples I’ve tried. It’s quite
> simple, which makes it especially appealing!
>
> I started by defining a wrapper around the `SF` constructor to attach
> rules to:
>
> mkSF :: (a -> s -> Step s b) -> s -> SF a b
> mkSF = SF
> {-# INLINE CONLIKE [1] mkSF #-}
>
> I then changed the definitions of (.), (***), (&&&), (+++), and (|||)
> to use `mkSF` instead of `SF`, but I left the other methods alone, so
> they just use `SF` directly. Then I defined two rewrite rules:
>
> {-# RULES
> "mkSF @((), _)" forall f s. mkSF f ((), s) =
>   SF (\a s1 -> case f a ((), s1) of Step ((), s2) b -> Step s2 b) s
> "mkSF @(_, ())" forall f s. mkSF f (s, ()) =
>   SF (\a s1 -> case f a (s1, ()) of Step (s2, ()) b -> Step s2 b) s
> #-}
>
> That’s it. These two rules alone are enough to eliminate the redundant
> tupling. Now the optimized version of `mapMaybeSF` is beautiful!
>
> mapMaybeSF = \ @ a @ b f -> case f of { SF @ s f2 s2 ->
>   SF (\ a1 s1 -> case a1 of {
>Nothing -> case s1 of dt { __DEFAULT -> Step dt Nothing }
>Just x -> case f2 x s1 of {
>  Step s2' c1 -> Step s2' (Just c1) }})
>  s2 }
>
> So unless this breaks down in some larger situation I’m not aware of, I
> think this solves my problem without the need for any fancy SpecConstr
> shenanigans. Many thanks to you, Sebastian, for pointing me in the right
> direction!
>
> Alexis


Re: Fusing loops by specializing on functions with SpecConstr?

2020-04-01 Thread Sebastian Graf
>
> Looking at the optimized core, it’s true that the conversion of Maybe to
> Either and back again gets eliminated, which is wonderful! But what’s less
> wonderful is the value passed around through `s`:
>
> mapMaybeSF
>   = \ (@ a) (@ b) (f :: SF a b) ->
>   case f of { SF @ s f2 s2 ->
>   SF
> (\ (a1 :: Maybe a) (ds2 :: ((), ((), (((), (((), (((), s),
> ())), ((), ((), (), ((), ()) ->
>

That is indeed true. But note that as long as you manage to inline
`mapMaybeSF`, the final `runSF` will only allocate once on the "edge" of
each iteration, all intermediate allocations will have been fused away. But
the allocation of these non-sense records seems unfortunate.

Optimisation-wise, I see two problems here:

   1. `mapMaybeSF` is already too huge to inline without INLINE. That is
   because its lambda isn't floated out to the top-level, which is because of
   the existential @s (that shouldn't be a problem), but also its mention of
   f2. The fact that f2 occurs free rather than as an argument makes the
   simplifier specialise `mapMaybeSF` for it, so if it were floated out
   (thereby necessarily lambda-lifted) to top-level, then we'd lose the
   ability to specialise without SpecConstr (which currently only applies to
   recursive functions anyway).
   2. The lambda isn't let-bound (which is probably a consequence of the
   previous point), so it isn't strictness analysed and we have no W/W split.
   If we had, I imagine we would have a worker of type `s -> ...` here. W/W is
   unnecessary if we manage to inline the function anyway, but I'm pretty
   certain we won't inline for larger programs (like `mapMaybeSF` already), in
   which case every failure to inline leaves behind such a residue of records.

So this already seems quite brittle. Maybe a very targeted optimisation
that gets rid of the boring ((), _) wrappers could be worthwhile, given
that a potential caller is never able to construct such a thing themselves.
But that very much hinges on being able to prove that in fact every such
((), _) constructed in the function itself terminates.

There are a few ways I can think of in which we as the programmer could
have been smarter, though:


   - Simply by specialising `SF` for the `()` case:

   data SF a b where
 SFState :: !(a -> s -> Step s b) -> !s -> SF a b
 SFNoState :: !(a -> Step () b) -> SF a b

   And then implementing every action 2^n times, where n is the number of
   `SF` arguments. That undoubtly leads to even more code bloat.
   - An alternative that I'm a little uncertain would play out would be

   data SMaybe a = SNothing | SJust !a
   data SF a b where
  SF :: !(SMaybe (s :~: ())) -> !(a -> s -> Step s b) -> !s -> SF a b

   and try match on the proof everywhere needed to justify e.g. in `(.)`
   only storing e.g. s1 instead of (s1, s2). Basically do some type algebra in
   the implementation.
   - An even simpler thing would be to somehow use `Void#` (which should
   have been named `Unit#`), but I think that doesn't work due to runtime rep
   polymorphism restrictions.

I think there is lots that can be done to tune this idea.
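
The type-algebra idea in the second bullet can be sketched as follows.
This is only illustrative (`compose` and the record layout are my names,
not GHC or library code); the point is that matching on the `Refl` proof
lets composition reuse the stateful arrow's state directly instead of
pairing it with `()`:

```haskell
{-# LANGUAGE GADTs #-}
module SFUnit where

import Data.Type.Equality ((:~:) (..))

data Step s a = Yield !s a

data SMaybe a = SNothing | SJust !a

-- Hypothetical SF variant carrying an optional proof that the state is ().
data SF a b where
  SF :: !(SMaybe (s :~: ())) -> !(a -> s -> Step s b) -> !s -> SF a b

-- If the first arrow is stateless (we hold a proof s ~ ()), reuse the
-- second arrow's state unchanged; otherwise fall back to pairing.
compose :: SF b c -> SF a b -> SF a c
compose (SF pg g sg) (SF (SJust Refl) f ()) =
  SF pg (\a s -> case f a () of Yield () b -> g b s) sg
compose (SF _ g sg) (SF _ f sf) =
  SF SNothing
     (\a (s1, s2) -> case f a s1 of
        Yield s1' b -> case g b s2 of
          Yield s2' c -> Yield (s1', s2') c)
     (sf, sg)
```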

Am Mi., 1. Apr. 2020 um 01:16 Uhr schrieb Alexis King :

> > On Mar 31, 2020, at 17:05, Sebastian Graf  wrote:
> >
> > Yeah, SPEC is quite unreliable, because IIRC at some point it's either
> consumed or irrelevant. But none of the combinators you mentioned should
> rely on SpecConstr! They are all non-recursive, so the Simplifier will take
> care of "specialisation". And it works just fine, I just tried it
>
> Ah! You are right, I did not read carefully enough and misinterpreted.
> That approach is clever, indeed. I had tried something similar with a CPS
> encoding, but the piece I was missing was using the existential to tie the
> final knot.
>
> I have tried it out on some of my experiments. It’s definitely a
> significant improvement, but it isn’t perfect. Here’s a small example:
>
> mapMaybeSF :: SF a b -> SF (Maybe a) (Maybe b)
> mapMaybeSF f = proc v -> case v of
>   Just x -> do
> y <- f -< x
> returnA -< Just y
>   Nothing -> returnA -< Nothing
>
> Looking at the optimized core, it’s true that the conversion of Maybe to
> Either and back again gets eliminated, which is wonderful! But what’s less
> wonderful is the value passed around through `s`:
>
> mapMaybeSF
>   = \ (@ a) (@ b) (f :: SF a b) ->
>   case f of { SF @ s f2 s2 ->
>   SF
> (\ (a1 :: Maybe a) (ds2 :: ((), ((), (((), (((), (((), s),
> ())), ((), ((), (), ((), ()) ->
>
> Yikes! GHC has no obvious way to clean this type up, so it will just grow
> indefinitely, and we end up doing a dozen pattern-matches in th

Re: Fusing loops by specializing on functions with SpecConstr?

2020-03-31 Thread Sebastian Graf
e
>   idea: do one layer of unrolling by hand, perhaps even in FRP source 
> code:
>
> add1rec = SF (\a -> let !b = a+1 in (b,add1rec))
> add1 = SF (\a -> let !b = a+1 in (b,add1rec))
>
>
> Yes, I was playing with the idea at one point of some kind of RULE that
> inserts GHC.Magic.inline on the specialized RHS. That way the programmer
> could ask for the unrolling explicitly, as otherwise it seems unreasonable
> to ask the compiler to figure it out.
>
> On Mar 31, 2020, at 08:08, Sebastian Graf  wrote:
>
> We can formulate SF as a classic Stream that needs an `a` to produce its
> next element of type `b` like this (SF2 below)
>
>
> This is a neat trick, though I’ve had trouble getting it to work reliably
> in my experiments (even though I was using GHC.Types.SPEC). That said, I
> also feel like I don’t understand the subtleties of SpecConstr very well,
> so it could have been my fault.
>
> The more fundamental problem I’ve found with that approach is that it
> doesn’t do very well for arrow combinators like (***) and (|||), which come
> up very often in arrow programs but rarely in streaming. Fusing long chains
> of first/second/left/right is actually pretty easy with ordinary RULEs, but
> (***) and (|||) are much harder, since they have multiple continuations.
>
> It seems at first appealing to rewrite `f *** g` into `first f >>> second
> g`, which solves the immediate problem, but this is actually a lot less
> efficient after repeated rewritings. You end up rewriting `(f ||| g) *** h`
> into `first (left f) >>> first (right g) >>> second h`, turning two
> distinct branches into four, and larger programs have much worse
> exponential blowups.
>
> So that’s where I’ve gotten stuck! I’ve been toying with the idea of
> thinking about expression “shells”, so if you have something like
>
> first (a ||| b) >>> c *** second (d ||| e) >>> f
>
> then you have a “shell” of the shape
>
> first (● ||| ●) >>> ● *** second (● ||| ●) >>> ●
>
> which theoretically serves as a key for the specialization. You can then
> generate a specialization and a rule:
>
> $s a b c d e f = ...
> {-# RULE forall a b c d e f.
> first (a ||| b) >>> c *** second (d ||| e) >>> f = $s a b c d
> e f #-}
>
> The question then becomes: how do you specify what these shells are, and
> how do you specify how to transform the shell into a specialized function?
> I don’t know, but it’s something a Core plugin could theoretically do.
> Maybe it makes sense for this domain-specific optimization to be a Core
> pass that runs before the simplifier, like the typeclass specializer
> currently is, but I haven’t explored that yet.
>
> Alexis
>


Re: Fusing loops by specializing on functions with SpecConstr?

2020-03-31 Thread Sebastian Graf
We can formulate SF as a classic Stream that needs an `a` to produce its
next element of type `b` like this (SF2 below):

{-# LANGUAGE BangPatterns #-}
{-# LANGUAGE GADTs #-}

module Lib where

newtype SF a b = SF { runSF :: a -> (b, SF a b) }

inc1 :: SF Int Int
inc1 = SF $ \a -> let !b = a+1 in (b, inc1)

data Step s a = Yield !s a

data SF2 a b where
  SF2 :: !(a -> s -> Step s b) -> !s -> SF2 a b

inc2 :: SF2 Int Int
inc2 = SF2 go ()
  where
go a _ = let !b = a+1 in Yield () b

runSF2 :: SF2 a b -> a -> (b, SF2 a b)
runSF2 (SF2 f s) a = case f a s of
  Yield s' b -> (b, (SF2 f s'))

Note the absence of recursion in inc2. This resolves the tension around
having to specialise for a function argument that is recursive and having
to do the unrolling. I bet that similar to stream fusion, we can arrange
that only the consumer has to be explicitly recursive. Indeed, I think this
will help you inline mapping combinators such as `second`, because it won't
be recursive itself anymore.
Now we "only" have to solve the same problems as with good old stream
fusion.

The tricky case (after realising that we need to add `Skip` to `Step` for
`filterSF2`) is when we want to optimise a signal of signals, e.g.
something like `concatMapSF2 :: (b -> SF2 a c) -> SF2 a b -> SF2 a c` or
some such. And here we are again in #855/#915.



Also if you need convincing that we can embed any SF into SF2, look at this:

embed :: SF Int Int -> SF2 Int Int
embed origSF = SF2 go origSF
  where
go a sf = case runSF sf a of
  (b, sf') -> Yield sf' b
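
To see the pieces fit together, here is a small driver for the definitions
above (`demo` is my name; everything else is from the message). Each step
yields the successor and a new, stateless signal function to continue with:

```haskell
demo :: (Int, Int)
demo =
  let (b1, sf') = runSF2 inc2 41
      (b2, _)   = runSF2 sf' b1
  in (b1, b2)  -- evaluates to (42, 43)
```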

Please do open a ticket about this, though. It's an interesting data point!

Cheers,
Sebastian


Am Di., 31. März 2020 um 13:12 Uhr schrieb Simon Peyton Jones <
simo...@microsoft.com>:

> Wow – tricky stuff!   I would never have thought of trying to optimise
> that program, but it’s fascinating that you get lots and lots of them from
> FRP.
>
>
>
>- Don’t lose this thread!  Make a ticket, or a wiki page. If the
>former, put the main payload (including Alexis’s examples) into the
>Descriptions, not deep in the discussion.
>- I wonder whether it’d be possible to adjust the FRP library to
>generate easier-to-optimise code. Probably not, but worth asking.
>- Alexis’s proposed solution relies on
>   - Specialising on a function argument.  Clearly this must be
>   possible, and it’d be very beneficial.
>   - Unrolling one layer of a recursive function.  That seems harder:
>   how we know to **stop** unrolling as we successively simplify?  One
>   idea: do one layer of unrolling by hand, perhaps even in FRP source 
> code:
>
> add1rec = SF (\a -> let !b = a+1 in (b,add1rec))
>
> add1 = SF (\a -> let !b = a+1 in (b,add1rec))
>
>
>
> Simon
>
>
>
> *From:* ghc-devs  *On Behalf Of *Sebastian
> Graf
> *Sent:* 29 March 2020 15:34
> *To:* Alexis King 
> *Cc:* ghc-devs 
> *Subject:* Re: Fusing loops by specializing on functions with SpecConstr?
>
>
>
> Hi Alexis,
>
>
>
> I've been wondering the same things and have worked on it on and off. See
> my progress in https://gitlab.haskell.org/ghc/ghc/issues/855#note_149482
> and https://gitlab.haskell.org/ghc/ghc/issues/915#note_241520
> .
>
>
>
> The big problem with solving the higher-order specialisation problem
> through SpecConstr (which is what I did in my reports in #855) is indeed
> that it's hard to
>
>1. Anticipate what the rewritten program looks like without doing a
>Simplifier pass after each specialisation, so that we can see and exploit
>new specialisation opportunities. SpecConstr does use the simple Core
>optimiser but, that often is not enough IIRC (think of ArgOccs from
>recursive calls). In particular, it will not do RULE rewrites. Interleaving
>SpecConstr with the Simplifier, apart from nigh impossible conceptually, is
>computationally intractable and would quickly drift off into Partial
>Evaluation swamp.
>2. Make the RULE engine match and rewrite call sites in all the call
>patterns to which they can apply.
>I.e., `f (\x -> Just (x +1))` calls its argument

Re: DataCon tag value convention

2020-02-12 Thread Sebastian Graf
You probably couldn't do pointer tagging

anymore, which is probably a substantial performance loss.

Am Mi., 12. Feb. 2020 um 19:58 Uhr schrieb Csaba Hruska <
csaba.hru...@gmail.com>:

> Hello,
>
> In theory could GHC codegen work if every data constructor in the whole
> program  have a globally unique tag value instead of starting from 1 for
> each algebraic data type?
> Would this break any GHC design decision?
>
> Regards,
> Csaba


Re: Residency profiles

2020-01-23 Thread Sebastian Graf
This recently came up again. It seems that `+RTS -h -i0` will just turn
every minor collection into a major one:
https://gitlab.haskell.org/ghc/ghc/issues/17387#note_248705
`-i0` seems significantly different from `-i0.001`, say, in that it just
turns minor GCs into major ones and doesn't introduce non-determinism
otherwise. Sampling rate can be controlled with `-A`, much like `-F1` (but
it's still faster for some reason).
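
(As a complement to heap profiles, residency at a known program state can
also be read programmatically via the RTS statistics API; this sketch
assumes the program is started with +RTS -T, and `reportResidency` is my
name, not an existing GHC function:)

```haskell
import GHC.Stats (RTSStats (..), getRTSStats, getRTSStatsEnabled)
import Control.Monad (when)

-- Print the maximum live bytes seen so far; only meaningful when the
-- program runs with +RTS -T (otherwise stats collection is disabled).
reportResidency :: String -> IO ()
reportResidency label = do
  ok <- getRTSStatsEnabled
  when ok $ do
    stats <- getRTSStats
    putStrLn (label ++ ": max live bytes = " ++ show (max_live_bytes stats))
```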

Am Mo., 10. Dez. 2018 um 09:11 Uhr schrieb Simon Marlow :

> https://phabricator.haskell.org/D5428
>
>
> On Sun, 9 Dec 2018 at 10:12, Sebastian Graf  wrote:
>
>> Ah, I was only looking at `+RTS --help`, not the users guide. Silly me.
>>
>> Am Do., 6. Dez. 2018 um 20:53 Uhr schrieb Simon Marlow <
>> marlo...@gmail.com>:
>>
>>> It is documented!
>>> https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/runtime_control.html#rts-flag--F%20%E2%9F%A8factor%E2%9F%A9
>>>
>>> On Thu, 6 Dec 2018 at 16:21, Sebastian Graf  wrote:
>>>
>>>> Hey,
>>>>
>>>> thanks, all! Measuring with `-A1M -F1` delivers much more reliable
>>>> residency numbers.
>>>> `-F` doesn't seem to be documented. From reading `rts/RtsFlags.c` and
>>>> `rts/sm/GC.c` I gather that it's the factor by which to multiply the number
>>>> of live bytes to get the new old gen size?
>>>> So effectively, the old gen will 'overflow' on every minor GC, neat!
>>>>
>>>> Greetings
>>>> Sebastian
>>>>
>>>> Am Do., 6. Dez. 2018 um 12:52 Uhr schrieb Simon Peyton Jones via
>>>> ghc-devs :
>>>>
>>>>> |  Right. A parameter for fixing the nursery size would be easy to
>>>>> implement,
>>>>> |  I think. Just a new flag, then in GC.c:resize_nursery() use the
>>>>> flag as the
>>>>> |  nursery size.
>>>>>
>>>>> Super!  That would be v useful.
>>>>>
>>>>> |  "Max. residency" is really hard to measure (need to do very
>>>>> frequent GCs),
>>>>> |  perhaps a better question to ask is "residency when the program is
>>>>> in state
>>>>> |  S".
>>>>>
>>>>> Actually, Sebastian simply wants to see an accurate, reproducible
>>>>> residency profile, and doing frequent GCs might well be an acceptable
>>>>> cost.
>>>>>
>>>>> Simon
>>>>


Re: is Unlifted Type == Primitive Type?

2020-01-20 Thread Sebastian Graf
Hi Csaba,

Yes, boxed unlifted ADTs are supported by code-gen, or at least the fix for
codegen to deal with it is [rather simple](
https://gitlab.haskell.org/ghc/ghc/commit/fc4e2a03ebb40e2268ec0deb9833ec82bd2d7bee
).

Hope that helps.
Sebastian
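
As a concrete illustration of the user-defined unlifted types discussed in
the quoted thread below, here is a minimal -XUnliftedNewtypes sketch
(GHC 8.10 or later; module and names are mine):

```haskell
{-# LANGUAGE UnliftedNewtypes, MagicHash #-}
module UnliftedNT where

import GHC.Exts (Int#, (+#))

-- With -XUnliftedNewtypes the wrapped type may be unlifted and unboxed;
-- N has the same runtime representation as a raw Int#.
newtype N = MkN Int#

addN :: N -> N -> N
addN (MkN x) (MkN y) = MkN (x +# y)
```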

Am Mo., 20. Jan. 2020 um 11:13 Uhr schrieb Csaba Hruska <
csaba.hru...@gmail.com>:

> I'm also interested if Boxed Unlifted non Primitive types are supported by
> the codegen?
> Sorry, but I'm not confident enough in the topic to update the wiki.
>
>
> On Mon, Jan 20, 2020 at 10:58 AM Richard Eisenberg 
> wrote:
>
>> The recent addition of -XUnliftedNewtypes means that user-defined
>> newtypes (
>> https://github.com/ghc-proposals/ghc-proposals/blob/master/proposals/0098-unlifted-newtypes.rst)
>> can indeed be unlifted and unboxed. There is also a proposal for more
>> general unlifted data (
>> https://github.com/ghc-proposals/ghc-proposals/pull/265).
>>
>> If the wiki is out of date, do you think you could update it?
>>
>> Thanks!
>> Richard
>>
>> On Jan 20, 2020, at 9:45 AM, Csaba Hruska  wrote:
>>
>> Hello,
>>
>> According to GHC Wiki
>> 
>> it seems that only primitive types can be unlifted.
>> Is this true in general? (i.e. no user type can be unlifted)
>> 
>> Does the Stg to Cmm codegen support compilation for a variable of user
>> defined ADT as unlifted?
>> i.e. some analysis proved that it is always a constructor and never a
>> thunk.
>>
>> Thanks,
>> Csaba
>>


Re: Handling source locations in HsSyn via TTG

2019-10-30 Thread Sebastian Graf
> I would like to submit a solution E, which is just a variant of D (or a
meshing together of D and B), but may have different pros and cons.

I like this conceptually: No `WrapL`/`WrapX`/`XWrap`/`XRec` (the trouble to
find a fitting name already suggests that it's maybe a little too general a
concept), but rather a nicely targeted `XLoc`.

But I'm afraid that the unconditional ping-pong incurs an indirection even
if you instantiate `XLoc` to `()` all the time. I think that's a no-go,
according to the wiki page: Think of TH, which would pay needlessly.
Also it's not much better than just the old ping-pong style, where we could
always pass `noSrcLoc` instead of `()` for basically the same heap layout.
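
A minimal sketch of that `XLoc` variant, with standalone stand-ins for
GHC's types just to show the shape (names are illustrative, not the real
GHC definitions):

```haskell
{-# LANGUAGE TypeFamilies #-}
module LocSketch where

data SrcSpan = SrcSpan deriving Show  -- stand-in for GHC's real SrcSpan

-- The annotation type is chosen per pass by an open type family...
type family XLoc p

-- ...so Located always has the same shape, but the payload differs.
data Located p a = L (XLoc p) a

data Parsed                         -- stand-in for a GHC pass index
type instance XLoc Parsed = SrcSpan

data TH                             -- Template Haskell "pass"
type instance XLoc TH = ()          -- no location info, but note that the
                                    -- L constructor (and its indirection)
                                    -- is still present
```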

Am Mi., 30. Okt. 2019 um 14:06 Uhr schrieb Spiwack, Arnaud <
arnaud.spiw...@tweag.io>:

> I would like to submit a solution E, which is just a variant of D (or a
> meshing together of D and B), but may have different pros and cons.
>
> Keep the ping-pong style. But! Let Located take the pass as an argument.
> Now, Located would be
>
> data Located p a = L (XLoc p) a
>
> We could define XLoc p to be a source span in GHC passes, and () in
> Template Haskell.
> --
>
> I’m not arguing in favour of E, at this point: just submitting an
> alternative. I don’t like A though: I’m assuming that if we are free to add
> the L constructor or not, it will be forgotten often. This is the sort of
> mistake which will be hard to catch before releases. It sounds like
> unnecessary risk.
>
> PS: Maybe a friendlier version of Located, in this solution E, could be
>
> data Located a p = L (XLoc p) (a p)
>
>
> On Mon, Oct 28, 2019 at 11:46 AM Richard Eisenberg 
> wrote:
>
>> A nice property of Solution D (and I think this is a new observation) is
>> that we could accommodate both located source and unlocated. For example:
>>
>> > data GhcPass (c :: Pass)   -- as today
>> > data Pass = MkPass Phase HasLoc
>> > data Phase = Parsed | Renamed | Typechecked
>> > data HasLoc = YesLoc | NoLoc
>> >
>> > type instance WrapL (GhcPass (MkPass p YesLoc)) f = Located (f (GhcPass
>> (MkPass p YesLoc)))
>> > type instance WrapL (GhcPass (MkPass p NoLoc)) f = f (GhcPass (MkPass p
>> NoLoc))
>>
>> I don't actually think this is a good idea, as it would mean making a
>> bunch of functions polymorphic in the HasLoc parameter, which is
>> syntactically annoying. But the existence of this approach suggests that
>> the design scales.
>>
>> Regardless of what you think of this new idea, I'm in favor of Solution
>> D. I like my types to stop me from making errors, and I'm willing to put up
>> with the odd type error asking me to write `unLoc` as I work in order to
>> avoid errors.
>>
>> Richard
>>
>> > On Oct 28, 2019, at 10:30 AM, Vladislav Zavialov 
>> wrote:
>> >
>> >> Are you arguing for Solution D?  Or are you proposing some new
>> solution E?  I can't tell.
>> >
>> > I suspect that I’m arguing for Solution B, but it’s hard to tell
>> because it’s not described in enough detail in the Wiki.
>> >
>> >> Easy
>> >>
>> >>  type instance WrapL ToolPass t = ...
>> >>
>> >> What am I missing?
>> >
>> >
>> > This assumes that `WrapL` is an open type family. In this case, there’s
>> no problem. The merge request description has the following definition of
>> WrapL:
>> >
>> > type family WrapL p (f :: * -> *) where
>> >  WrapL (GhcPass p) f = Located (f (GhcPass p))
>> >  WrapL p   f =  f p
>> > type LPat p = WrapL p Pat
>> >
>> > That wouldn’t be extensible. However, if WrapL is open, then Solution D
>> sounds good to me.
>> >
>> > - Vlad
>> >
>> >> On 28 Oct 2019, at 13:20, Simon Peyton Jones 
>> wrote:
>> >>
>> >> Vlad
>> >>
>> >> Are you arguing for Solution D?  Or are you proposing some new
>> solution E?  I can't tell.
>> >>
>> >>
>> >> | As to merge request !1970, it isn’t good to special-case GhcPass in a
>> >> | closed type family, making other tools second-class citizens. Let’s
>> say I
>> >> | have `MyToolPass`, how would I write an instance of `WrapL` for it?
>> >>
>> >> Easy
>> >>
>> >>  type instance WrapL ToolPass t = ...
>> >>
>> >> What am I missing?
>> >>
>> >> Simon
>> >>
>> >> | -Original Message-
>> >> | From: Vladislav Zavialov 
>> >> | Sent: 28 October 2019 10:07
>> >> | To: Simon Peyton Jones 
>> >> | Cc: ghc-devs@haskell.org
>> >> | Subject: Re: Handling source locations in HsSyn via TTG
>> >> |
>> >> | I care about this, and I maintain my viewpoint described in
>> >> |
>> >> | https://mail.haskell.org/pipermail/ghc-devs/2019-February/017080.html
>> >> |
>> >> | I’m willing to implement this.
>> >> |
>> >> | As to merge request !1970, it isn’t good to 

Re: Simplifier bug fixed in GHC 8.8.1?

2019-10-28 Thread Sebastian Graf
Hi Alexis,

I think the fact that it looks like it's fixed is only a coincidence. See
https://gitlab.haskell.org/ghc/ghc/issues/17409, where I go into a bit more
detail.

Cheers
Sebastian

Am Mo., 28. Okt. 2019 um 07:16 Uhr schrieb Alexis King <
lexi.lam...@gmail.com>:

> Hi all,
>
> I have an odd question: I’ve bumped into a clear simplifier bug, and
> although it only happens on GHC 8.6.5, not 8.8.1, I’d like to locate the
> change that fixed it. My library’s test suite currently fails on GHC 8.6.5
> due to the bug, and I’d rather not force all my users to upgrade to 8.8 if
> I can help it, so I’m hoping to find a workaround.
>
> The minimal test case I’ve found for the bug is this program:
>
> {-# LANGUAGE GeneralizedNewtypeDeriving, StandaloneDeriving,
> TypeFamilies #-}
>
> import Control.Exception
> import Control.Monad.IO.Class
> import Control.Monad.Trans.Identity
> import Control.Monad.Trans.Reader
>
> class Monad m => MonadFoo m where
>   foo :: m a -> m a
> instance MonadFoo IO where
>   foo m = onException m (pure ())
> instance MonadFoo m => MonadFoo (ReaderT r m) where
>   foo m = ReaderT $ \r -> foo (runReaderT m r)
> deriving instance MonadFoo m => MonadFoo (IdentityT m)
>
> type family F m where
>   F m = IdentityT m
>
> newtype FT m a = FT { runFT :: F m a }
>   deriving (Functor, Applicative, Monad, MonadIO, MonadFoo)
>
> main :: IO ()
> main = run (foo (liftIO (throwIO (IndexOutOfBounds "bang"))))
>   where
> run :: ReaderT () (FT (ReaderT () IO)) a -> IO a
> run = flip runReaderT () . runIdentityT . runFT . flip runReaderT
> ()
>
> Using GHC 8.6.5 on macOS 10.14.5, compiling this program with
> optimizations reliably triggers the -fcatch-bottoms sanitization:
>
> $ ghc -O -fcatch-bottoms weird.hs && ./weird
> [1 of 1] Compiling Main ( weird.hs, weird.o )
> Linking weird ...
> weird: Bottoming expression returned
>
> What goes wrong? Somehow the generated core for this program includes the
> following:
>
> lvl_s47B :: SomeException
> lvl_s47B = $fExceptionArrayException_$ctoException lvl_s483
>
> m_s47r :: () -> State# RealWorld -> (# State# RealWorld, () #)
> m_s47r
>   = \ _ (eta_B1 :: State# RealWorld) -> raiseIO# lvl_s47B eta_B1
>
> main_s2Ww :: State# RealWorld -> (# State# RealWorld, () #)
> main_s2Ww
>   = \ (eta_a2wK :: State# RealWorld) ->
>   catch# (case m_s47r `cast`  of { }) raiseIO# eta_a2wK
>
> This core is completely bogus: it assumes that m_s47r is bottom, but
> m_s47r is a top-level function! The program still passes -dcore-lint,
> unfortunately, as it is still well-typed. (Also, in case it helps:
> -ddump-simplifier-iterations shows that the buggy transformation occurs in
> the first iteration of the very first simplifier pass.)
>
> I’ve been trying to figure out what change might have fixed this so that I
> can assess if it’s possible to work around, but I haven’t found anything
> obvious. I’ve been slowly `git bisect`ing to look for the commit that
> introduced the fix, but many of the commits I’ve tested cause unrelated
> panics on my machine, which has been exacerbating the problem of the slow
> recompilation times. I’m a little at wits’ end, but opening a bug report
> hasn’t felt right, since the bug does appear to already be fixed.
>
> Does this issue ring any bells to anyone on this list? Is there a
> particular patch that landed between GHC 8.6.5 and GHC 8.8.1 that might
> have fixed this problem? If not, I’ll keep trying with `git bisect`, but
> I’d appreciate any pointers.
>
> Thanks,
> Alexis
>


Re: Urgent: git problem

2019-10-23 Thread Sebastian Graf
Hi,

Some googling turned up this SO thread
https://stackoverflow.com/a/43253320/388010
Does that help?

Cheers
Sebastian

Am Mi., 23. Okt. 2019 um 17:22 Uhr schrieb Simon Peyton Jones via ghc-devs <
ghc-devs@haskell.org>:

> Aieee!   All my GHC repos are failing with this.  As a result I can’t
> pull.  What should I do?   Thanks!
>
> Simon
>
>
>
> git pull
>
> error: cannot lock ref 'refs/remotes/origin/wip/rae/remove-tc-dep': 
> 'refs/remotes/origin/wip/rae'
> exists; cannot create 'refs/remotes/origin/wip/rae/remove-tc-dep'
>
> From gitlab.haskell.org:ghc/ghc
>
> ! [new branch]wip/rae/remove-tc-dep->
> origin/wip/rae/remove-tc-dep  (unable to update local ref)
>
> error: cannot lock ref 'refs/remotes/origin/wip/rae/split-up-modules':
> 'refs/remotes/origin/wip/rae' exists; cannot create
> 'refs/remotes/origin/wip/rae/split-up-modules'
>
> ! [new branch]wip/rae/split-up-modules ->
> origin/wip/rae/split-up-modules  (unable to update local ref)
>
> simonpj@MSRC-3645512:~/code/HEAD-2$


Re: How to navigate around the source tree?

2019-10-23 Thread Sebastian Graf
FWIW, I'm using VSCode's fuzzy file search with Ctrl+P (and vim's
equivalent) rather successfully. Just tried it for Hs/Utils.hs by typing
'hsutils.hs'. It didn't turn up as the first result in VSCode, but it did in
vim.

Am Mi., 23. Okt. 2019 um 14:27 Uhr schrieb Matthew Pickering <
matthewtpicker...@gmail.com>:

> I use `fast-tags` which doesn't look at the hierarchy at all and I'm
> not sure what the improvement would be as the names of the modules
> would still clash.
>
> If there is some other recommended way to jump to a module then that
> would also work for me.
>
> Matt
>
>
> On Wed, Oct 23, 2019 at 12:08 PM Sylvain Henry  wrote:
> >
> > Hi,
> >
> > How do you generate your tags file? It seems to be a shortcoming of the
> > generator to not take into account the location of the definition file.
> >
> >  > Perhaps `HsUtils` and `StgUtils` would be appropriate to
> > disambiguate`Hs/Utils` and `StgToCmm/Utils`.
> >
> > We are promoting the module prefixes (`Hs`, `Stg`, `Tc`, etc.) into
> > proper module layers (e.g. `HsUtils` becomes `GHC.Hs.Utils`) so it would
> > be redundant to add the prefixes back. :/
> >
> > Cheers,
> > Sylvain
> >
> > On 23/10/2019 12:52, Matthew Pickering wrote:
> > > Hi,
> > >
> > > The module rework has broken my workflow.
> > >
> > > Now my tags file is useless for jumping for modules as there are
> > > multiple "Utils" and "Types" modules. Invariably I am jumping to the
> > > wrong one. What do other people do to avoid this?
> > >
> > > Can we either revert these changes or give these modules unique names
> > > to preserve the only reliable way of navigating the code base?
> > > Perhaps `HsUtils` and `StgUtils` would be appropriate to disambiguate
> > > `Hs/Utils` and `StgToCmm/Utils`.
> > >
> > > Cheers,
> > >
> > > Matt


Re: Should coercion binders (arguments or binders in patterns) be TyVars?

2019-10-06 Thread Sebastian Graf
Hi Ömer,

I'm not sure if there's a case in GHC (yet, because newtype coercions are
zero-cost), but coercions in general (as introduced for example in Types
and Programming Languages) can carry computational content and thus can't
be erased.

Think of a hypothetical coercion `co :: Int ~ Double`; applying that
coercion as in `x |> co` to `x :: Int` would need to `fild` (load the
integer into a floating-point register) at run-time, so you can't erase it.
The fact that we can for newtypes is because `coerce` is basically just the
`id` function at runtime.
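To illustrate the zero-cost case from source Haskell (a sketch using the user-facing `Data.Coerce.coerce` rather than GHC-internal coercions; the newtype `Age` is made up for the example): coercing a whole list of `Int`s into a list of a newtype wrapper needs no traversal at run-time, precisely because the underlying coercion is operationally `id`:

```haskell
module Main where

import Data.Coerce (coerce)

-- A newtype shares the runtime representation of its wrapped type.
newtype Age = Age Int deriving Show

-- 'coerce' lifts the zero-cost conversion through the list type:
-- no map, no re-allocation, just a (free) change of type.
ages :: [Int] -> [Age]
ages = coerce

main :: IO ()
main = print (ages [1, 2, 3])
```

A hypothetical `Int ~ Double` coercion could not be implemented this way, since the payloads differ in representation.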

Cheers,
Sebastian

Am So., 6. Okt. 2019 um 10:28 Uhr schrieb Ömer Sinan Ağacan <
omeraga...@gmail.com>:

> Hi,
>
> I just realized that coercion binders are currently Ids and not TyVars
> (unlike
> other type arguments). This means that we don't drop coercion binders in
> CoreToStg. Example:
>
> {-# LANGUAGE ScopedTypeVariables, TypeOperators, PolyKinds, GADTs,
>TypeApplications, MagicHash #-}
>
> module UnsafeCoerce where
>
> import Data.Type.Equality ((:~:)(..))
> import GHC.Prim
> import GHC.Types
>
> unsafeEqualityProof :: forall k (a :: k) (b :: k) . a :~: b
> unsafeEqualityProof = error "unsafeEqualityProof evaluated"
>
> unsafeCoerce :: forall a b . a -> b
> unsafeCoerce x = case unsafeEqualityProof @_ @a @b of Refl -> x
>
> If I build this with -ddump-stg this is what I get for `unsafeCoerce`:
>
> UnsafeCoerce.unsafeCoerce :: forall a b. a -> b
> [GblId, Arity=1, Unf=OtherCon []] =
> [] \r [x_s2jn]
> case UnsafeCoerce.unsafeEqualityProof of {
>   Data.Type.Equality.Refl co_a2fd -> x_s2jn;
> };
>
> See the binder in `Refl` pattern.
>
> Unarise drops this binder because it's a "void" argument (doesn't have a
> runtime
> representation), but still it's a bit weird that we drop types but not
> coercions
> in CoreToStg.
>
> Is this intentional?
>
> Ömer


Re: Changes in GHC API modularity with the "Encode shape information in PMOracle" MR

2019-09-17 Thread Sebastian Graf
Shayne,

out of curiosity, could you find out which of the three modules DsMonad,
FamInst and TcSimplify lead to the blowup? If it's not too much of a
hassle, that is.
These are the only imports of PmOracle that aren't already exported from
ghc-lib-parser.

Cheers,
Sebastian

Am Mo., 16. Sept. 2019 um 22:38 Uhr schrieb Shayne Fletcher <
shayne.fletc...@daml.com>:

> Hi Sebastian,
>
> On Mon, Sep 16, 2019 at 5:23 PM Sebastian Graf 
> wrote:
>
>> Hi Shayne,
>>
>> Sorry to hear that! We didn't consider modularity at all and I would be
>> happy to try to refactor in a way that would allow `ghc-lib-parser` to be
>> properly separated again.
>> I'm fairly certain that I didn't directly touch anything parser related,
>> but apparently the new cyclic import of PmOracle within TcRnTypes (which is
>> also exposed from `ghc-lib-parser`) pulled in the other half of GHC.
>> I'll see if I can fix that tomorrow, if only by extracting a separate
>> `Types`-style module.
>>
>>
> That sounds awesome. Tremendous. Thank-you! Please feel free to reach out
> to me if there's anything I can do to help your analysis[*]!
>
> [*] For the record, the procedure for calculating the `ghc-lib-parser`
> modules is a little complicated by there needing to be some generated
> equivalents of `.hsc` files present for this to work but the procedure is
> at the end of the day just `ghc -M` invoked over `Parser.hs`.
>
>
>> Cheers,
>> Sebastian
>>
>
> Fingers crossed and all the best!
>
> --
> *Shayne Fletcher*
> Language Engineer */* +1 917 699 7663
> *Digital Asset* <https://digitalasset.com/>, creators of *DAML
> <https://daml.com/>*
>
> This message, and any attachments, is for the intended recipient(s) only,
> may contain information that is privileged, confidential and/or proprietary
> and subject to important terms and conditions available at
> http://www.digitalasset.com/emaildisclaimer.html. If you are not the
> intended recipient, please delete this message.


Re: Changes in GHC API modularity with the "Encode shape information in PMOracle" MR

2019-09-17 Thread Sebastian Graf
Hmm. The issue of build parallelism is kind-of orthogonal to .hs-boot
files/circular imports.

The impact of .hs-boot files on build parallelism depends on their
announced imports; at least in my semantic model of them (which might be
incorrect) they just break cycles.
The degree of parallelism is visible as a property of this DAG where cycles
are broken through insertion of .hs-boot nodes.

The issue here is one of transitive dependencies, which is a property on
the original cyclic dependency graph.

So while we have the dependency graph

HsExpr <--> TcRnTypes <--> PmOracle

This is of no issue for build parallelism, because the .hs-boot files
remove "feedback vertices" from the graph, rendering it acyclic:

TcRnTypes.hs-boot <- HsExpr <- TcRnTypes <- PmOracle
PmOracle.hs-boot <- TcRnTypes

Yet a package including HsExpr must also include PmOracle to compile.

The parallelism issue is best tackled with hadrian-based build profiles, from
which we could try to compute an optimal schedule and compare it across
multiple CI runs.
But the dependency issue here is rather specific to the set of modules that
should be included in ghc-lib-parser, I'm afraid, and I don't see how to
solve it in a general way other than pre-computing the set of transitive
dependencies for specific blessed modules. Which might be a thing we
want... One set for each major entry point to the compiler pipeline, for
example.

Am Di., 17. Sept. 2019 um 10:19 Uhr schrieb Matthew Pickering <
matthewtpicker...@gmail.com>:

> This is precisely the problem I was worried about in April.
>
> https://mail.haskell.org/pipermail/ghc-devs/2019-April/017493.html
>
> Any fix should ensure adding a test to make sure it doesn't happen again.
>
> Cheers,
>
> Matt
>
> On Mon, Sep 16, 2019 at 10:38 PM Shayne Fletcher via ghc-devs
>  wrote:
> >
> > Hi Sebastian,
> >
> > On Mon, Sep 16, 2019 at 5:23 PM Sebastian Graf 
> wrote:
> >>
> >> Hi Shayne,
> >>
> >> Sorry to hear that! We didn't consider modularity at all and I would be
> happy to try to refactor in a way that would allow `ghc-lib-parser` to be
> properly separated again.
> >> I'm fairly certain that I didn't directly touch anything parser
> related, but apparently the new cyclic import of PmOracle within TcRnTypes
> (which is also exposed from `ghc-lib-parser`) pulled in the other half of
> GHC.
> >> I'll see if I can fix that tomorrow, if only by extracting a separate
> `Types`-style module.
> >>
> >
> > That sounds awesome. Tremendous. Thank-you! Please feel free to reach
> out to me if there's anything I can do to help your analysis[*]!
> >
> > [*] For the record, the procedure for calculating the `ghc-lib-parser`
> modules is a little complicated by there needing to be some generated
> equivalents of `.hsc` files present for this to work but the procedure is
> at the end of the day just `ghc -M` invoked over `Parser.hs`.
> >
> >>
> >> Cheers,
> >> Sebastian
> >
> >
> > Fingers crossed and all the best!
> >
> > --
> > Shayne Fletcher
> > Language Engineer / +1 917 699 7663
> > Digital Asset, creators of DAML
> >
> > This message, and any attachments, is for the intended recipient(s)
> only, may contain information that is privileged, confidential and/or
> proprietary and subject to important terms and conditions available at
> http://www.digitalasset.com/emaildisclaimer.html. If you are not the
> intended recipient, please delete this
> > message.


Re: Changes in GHC API modularity with the "Encode shape information in PMOracle" MR

2019-09-16 Thread Sebastian Graf
Hi Shayne,

Sorry to hear that! We didn't consider modularity at all and I would be
happy to try to refactor in a way that would allow `ghc-lib-parser` to be
properly separated again.
I'm fairly certain that I didn't directly touch anything parser related,
but apparently the new cyclic import of PmOracle within TcRnTypes (which is
also exposed from `ghc-lib-parser`) pulled in the other half of GHC.
I'll see if I can fix that tomorrow, if only by extracting a separate
`Types`-style module.

Cheers,
Sebastian

Am Mo., 16. Sept. 2019 um 22:04 Uhr schrieb Shayne Fletcher via ghc-devs <
ghc-devs@haskell.org>:

> Some time back, the `ghc-lib` project split into two targets :
> `ghc-lib-parser` for those projects that just need to produce syntax trees
> and `ghc-lib` (re-exporting `ghc-lib-parser` modules) having the remaining
> modules for projects that need to go on and distill parse trees to Core.
> The idea of course was to reduce build times for tools like hlint that only
> need parse trees.
>
> Roughly, `ghc-lib-parser` got about 200 files and `ghc-lib` 300. Today
> after landing `7915afc6bb9539a4534db99aeb6616a6d145918a`, "Encode shape
> information in `PmOracle`", `ghc-lib-parser` now needs 543 files and
> `ghc-lib` gets just 25.
>
> That may be just bad luck for `ghc-lib-parser` and the way it has to be
> but I thought I should at least mention the knock-on effect of this change
> on the modularity of the GHC API in case this consequence hasn't been
> considered?
>
> --
> *Shayne Fletcher*
> Language Engineer */* +1 917 699 7663
> *Digital Asset* , creators of *DAML
> *
>
> This message, and any attachments, is for the intended recipient(s) only,
> may contain information that is privileged, confidential and/or proprietary
> and subject to important terms and conditions available at
> http://www.digitalasset.com/emaildisclaimer.html. If you are not the
> intended recipient, please delete this message.


Re: eqType modulo associated types?

2019-09-16 Thread Sebastian Graf
Hi Conal,

I've had success with `FamInstEnv.topNormaliseType` in the past. `eqType`
doesn't take `FamInstEnvs`, so I'm pretty sure it can't look through family
instances by itself.

Cheers,
Sebastian

Am Mo., 16. Sept. 2019 um 02:38 Uhr schrieb Conal Elliott :

> It looks to me like `eqType` accounts for type synonyms but not associated
> types. Is there a variant that compares modulo associated types, or perhaps
> a type normalizing operation to apply before `eqType`?
>
> Thanks, - Conal


Re: GHC: Policy on -O flags?

2019-08-27 Thread Sebastian Graf
Hi,

I used to think that the policy for being eligible for -O1 is that C must
be non-positive, i.e. that compile times don't suffer at all.
Everything beyond that (well, given that R is positive) should be -O2 only.
There's precedent at least for Late Lambda Lifting (which is only run for
-O2) here: https://phabricator.haskell.org/D5224#147959.
Upon re-reading I see that Simon Marlow identified C=1 as the hard
threshold. Maybe there are other cases as well?

Personally, I like C=0 for the fact that it means the compiler will only
get faster over time. And any reasonably tuned release executable will do
-O2 anyway.

Cheers,
Sebastian


Am Di., 27. Aug. 2019 um 17:11 Uhr schrieb Andreas Klebinger <
klebinger.andr...@gmx.at>:

> Hello ghc-devs and haskell users.
>
> I'm looking for opinions on when an optimization should be enabled by
> default.
>
> -O is currently the base line for an optimized build.
> -O2 adds around 10-20% compile time for a few % (around 2% if I remember
> correctly) in performance for most things.
>
> The question is now if I implement a new optimization, making code R%
> faster but slowing
> down the compiler down by C% at which point should an optimization be:
>
> * Enabled by default (-O)
> * Enabled only at -O2
> * Disabled by default
>
> Cheap always beneficial things make sense for -O
> Expensive optimizations which add little make sense for -O2
>
> But where exactly is the line here?
> How much compile time is runtime worth?
>
> If something slows down the compiler by 1%/2%/5%
> and speeds up code by 0.5%/1%/2% which combinations make sense
> for -O, -O2?
>
> Can there even be a good policy with the -O/-O2 split?
>
> Personally I generally want code to either:
> * Typecheck/Run at all (-O0, -fno-code, repl)
> * Not blow through all my RAM when adding a few Ints while developing: -O ?
> * Make a reasonable tradeoff between runtime/compiletime: -O ?
> * Give me all you got: -O2 (-O9)
>
> The use case for -O0 is rather clear, so is -O2.
> But what do people consider the use case for -O
>
> What trade offs seem acceptable to you as a user of GHC?
>
> Is it ok for -O to become slower for faster runtimes? How much slower?
> Should all new improvements which might slow down compilation
> be pushed to -O2?
>
> Or does an ideal solution add new flags?
> Tell me what do you think.
>
> Cheers,
> Andreas Klebinger
>


Re: Linker error when adding a new source file

2019-08-23 Thread Sebastian Graf
Ah, you already tried a clean build. Nevermind...

Am Fr., 23. Aug. 2019 um 17:14 Uhr schrieb Sebastian Graf <
sgraf1...@gmail.com>:

> I recently experienced this when rebasing. Have you tried a clean build?
> `rm -rf _build` was enough for me, IIRC.
>
> Am Fr., 23. Aug. 2019 um 17:08 Uhr schrieb Brandon Allbery <
> allber...@gmail.com>:
>
>> From the looks of it, you're building with a bootstrap compiler (stage
>> 0). Does the build compiler need to have this in its runtime libraries for
>> the built compiler to work? This will require you to work it in, in multiple
>> versions, the first providing it without using it and the next using the
>> provided one.
>>
>> On Fri, Aug 23, 2019 at 12:03 PM Jan van Brügge 
>> wrote:
>>
>>> Hi,
>>>
>>> in order to clean up my code, I've moved a bunch of stuff to a new
>>> source file, `TcRowTys.hs` that works similar to `TcTypeNats.hs`. But
>>> when trying to compile a clean build of GHC, I get a linker error:
>>>
>>> ```
>>>
>>> | Run Ghc LinkHs Stage0: _build/stage0/ghc/build/c/hschooks.o (and 1
>>> more) => _build/stage0/bin/ghc
>>>
>>> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.text+0x2814):
>>> error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure'
>>>
>>> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.data+0x578):
>>> error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure'
>>>
>>> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(TcHsType.o)(.data+0xdd8):
>>> error: undefined reference to 'ghc_TcRowTys_rnilTyCon_closure'
>>> collect2: Fehler: ld gab 1 als Ende-Status zurück
>>> `gcc' failed in phase `Linker'. (Exit code: 1)
>>>
>>> ```
>>>
>>> I had a look at the Wiki including the FAQ, but did not find anything
>>> about that topic. Does someone know what I have to do for this to work?
>>>
>>> Cheers,
>>> Jan
>>>
>>>
>>
>>
>> --
>> brandon s allbery kf8nh
>> allber...@gmail.com
>>
>


Re: Linker error when adding a new source file

2019-08-23 Thread Sebastian Graf
I recently experienced this when rebasing. Have you tried a clean build?
`rm -rf _build` was enough for me, IIRC.

Am Fr., 23. Aug. 2019 um 17:08 Uhr schrieb Brandon Allbery <
allber...@gmail.com>:

> From the looks of it, you're building with a bootstrap compiler (stage 0).
> Does the build compiler need to have this in its runtime libraries for the
> built compiler to work? This will require you to work it in, in multiple
> versions, the first providing it without using it and the next using the
> provided one.
>
> On Fri, Aug 23, 2019 at 12:03 PM Jan van Brügge  wrote:
>
>> Hi,
>>
>> in order to clean up my code, I've moved a bunch of stuff to a new
>> source file, `TcRowTys.hs` that works similar to `TcTypeNats.hs`. But
>> when trying to compile a clean build of GHC, I get a linker error:
>>
>> ```
>>
>> | Run Ghc LinkHs Stage0: _build/stage0/ghc/build/c/hschooks.o (and 1
>> more) => _build/stage0/bin/ghc
>>
>> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.text+0x2814):
>> error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure'
>>
>> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(PrelInfo.o)(.data+0x578):
>> error: undefined reference to 'ghc_TcRowTys_rowTyCons_closure'
>>
>> _build/stage0/lib/../lib/x86_64-linux-ghc-8.6.5/ghc-8.9.0.20190722/libHSghc-8.9.0.20190722.a(TcHsType.o)(.data+0xdd8):
>> error: undefined reference to 'ghc_TcRowTys_rnilTyCon_closure'
>> collect2: Fehler: ld gab 1 als Ende-Status zurück
>> `gcc' failed in phase `Linker'. (Exit code: 1)
>>
>> ```
>>
>> I had a look at the Wiki including the FAQ, but did not find anything
>> about that topic. Does someone know what I have to do for this to work?
>>
>> Cheers,
>> Jan
>>
>>
>
>
> --
> brandon s allbery kf8nh
> allber...@gmail.com
>


Re: PseudoOps in primops.txt.pp

2019-08-11 Thread Sebastian Graf
This turned out to be rather lengthy and ambivalent, but my current TL;DR
is that GHC.Magic Ids could all be PseudoOps, because we don't use
their definitions anyway.

---

Regarding 2., the answer has been right before my eyes in the form of Note
[ghcPrimIds (aka pseudoops)] and Note [magicIds]. The most important
difference I guess is that we can give meaningful, yet forgetful
definitions for functions in GHC.Magic, whereas we can't for proper
pseudoops.

Note [ghcPrimIds (aka pseudoops)] also answers 3.: IIUC if PseudoOps aren't
free abstractions already (proxy#), we try to inline them immediately in
Core. For example `noinline`, which is never inlined, could never be a
PseudoOp. As a side note: `noinline` seems to lack proper handling in the
demand analyser. For example, `noinline id x` should be detected as strict
in `x` by unleashing `id`s strictness signature. Not sure if we currently
do that.

The "What are PseudoOps" part of 1. is thus mostly resolved: PseudoOps are
functions with special semantics that can be lowered or erased in Core or
STG, so we will never have to think about generating code for them. We
still need to treat them specially, because we have no way to encode their
semantics at the source level. Examples (these are all current PseudoOps):

   - `seq` works on functions, but Haskell's `case` doesn't
   - `proxy#` is a symbolic inhabitant of `Proxy#`, which will be erased in
   code generation. I guess with -XUnliftedNewtypes we can finally define
   `Proxy#` in source Haskell as `newtype Proxy# a = Proxy# (# #)`
   - `unsafeCoerce#` can only be erased when going to STG, where we don't
   type check as part of linting.
   - `coerce` gets translated to casts as part of desugaring.
   - `nullAddr#` get inlined immediately to corresponding literal in Core.
   This is so that source syntax doesn't have to introduce a new literal.

Similarly, the definitions of GHC.Magic all seem to vanish after CorePrep.
In fact, I begin to think that GHC.Magic is just a subset of PseudoOps that
have semantics expressible in source Haskell (thus have a meaningful
definition). Which somewhat contradicts my observation above that
`noinline` couldn't be a PseudoOp: Clearly it could, because it is lowered
to id by the time we go to STG. This lowering (even in higher-order
situations, which is why we actually don't need the definition) seems to be
the whole point about having the compiler be aware of these special
identifiers.

So, for a concrete question: What are the reasons that we don't make e.g.
`lazy` a PseudoOp?


Am So., 11. Aug. 2019 um 12:42 Uhr schrieb Sebastian Graf <
sgraf1...@gmail.com>:

> Hey fellow devs,
>
> While implementing new PseudoOps, a couple of questions popped up:
>
>1. What are PseudoOps? When do we want to declare one? There doesn't
>seem to be any documentation around them. I only figured out that I
>probably want a PseudoOp by comparing to PrimOps I thought would be lowered
>at a similar stage (i.e. somewhere in Core or STG).
>2. Why aren't GHC.Magic.{lazy,noinline,oneShot} PseudoOps?
>3. Since we have to set all the IdInfo for `seq` and `noinline`
>manually, why is this incomplete? I.e., I'd expect a useful strictness
>signature and arity for both of these.
>
> Thanks!
> Sebastian
>


PseudoOps in primops.txt.pp

2019-08-11 Thread Sebastian Graf
Hey fellow devs,

While implementing new PseudoOps, a couple of questions popped up:

   1. What are PseudoOps? When do we want to declare one? There doesn't
   seem to be any documentation around them. I only figured out that I
   probably want a PseudoOp by comparing to PrimOps I thought would be lowered
   at a similar stage (i.e. somewhere in Core or STG).
   2. Why aren't GHC.Magic.{lazy,noinline,oneShot} PseudoOps?
   3. Since we have to set all the IdInfo for `seq` and `noinline`
   manually, why is this incomplete? I.e., I'd expect a useful strictness
   signature and arity for both of these.

Thanks!
Sebastian


Re: Try haskell-ide-engine on GHC!

2019-07-26 Thread Sebastian Graf
Hey all,

What can I say, after few hours of on and off tinkering I got it to work!
The hover information is incredibly helpful, as is jump to definition. It
works even in modules with type and name errors!
The error information, not so much (yet), at least not compared to the
shorter feedback loop of using ghcid.
Haven't used completions in anger yet, but it works quite well when fooling
around with it.

Great work, Zubin and Matthew! :)

As to my setup: I'm using VSCode Remote, so the language server will run on
my build VM which VSCode communicates with via SSH.
I'm using nix+home-manager to manage my configuration over there, so I had
to wrap the hie executable with the following script:

#! /usr/bin/env bash
. /etc/profile.d/nix.sh
nix-shell --pure /path/to/ghc.nix/ --run
/path/to/haskell-ide-engine/dist-newstyle/build/x86_64-linux/ghc-8.6.4/haskell-ide-engine-1.0.0.0/x/hie/build/hie/hie

Also the shellHook echo output from ghc.nix confuses the language server
protocol, so be sure to delete those 4 lines from ghc.nix/default.nix.

It takes quite a while to initialise the first time around. Be sure to look
at the output of the alanz.vscode-hie-server extension to see if there's
any progress being made.
Can only encourage you to try this out!

Best,
Sebastian


Am Do., 25. Juli 2019 um 12:21 Uhr schrieb Matthew Pickering <
matthewtpicker...@gmail.com>:

> Hi all,
>
> As some of you know I have been working on getting haskell-ide-engine
> working on GHC for the last few months. Perhaps now the branch is in a
> usable state where people can try it and report issues. All the basic
> features such as, hover, completion, error reporting, go to definition
> etc should work well. I suspect this will be enough for most
> developers.
>
> I have compiled a list of instructions about how to try out the branch.
>
> https://gist.github.com/mpickering/68ae458d2c426a29a7c1ddf798dbc793
>
> In the last few weeks Zubin has been a great help finishing some parts
> of the patch that I lost steam for and given it a much better chance
> of getting merged into the main repo before the end of the year.
>
> Cheers,
>
> Matt


Re: a better workflow?

2019-07-24 Thread Sebastian Graf
I found that git worktree works rather well, even with submodules (well,
mostly. Even if it doesn't for some reason, you can still update and init
the submodules manually, losing sharing in the process).
See https://stackoverflow.com/a/31872051, in particular the GitHub links to
`wtas` alias.

I mostly do this:

$ cd ~/code/hs/ghc
$ cd pristine
$ git wtas ../pmcheck

and mostly just hack away. From time to time I seem to have issues because
of confused submodule references, but as I said above doing a `git
submodule update --init --recursive` fixes that. Cloning the root GHC
checkout is the most time-consuming step, after all.
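Spelled out, the dance above is roughly the following (demonstrated on a scratch repo, since a real GHC clone is exactly the expensive part; the `wtas` alias from the linked gist boils down to `worktree add` plus a submodule sync, and the branch name `wip/pmcheck` is just an example):

```shell
#!/usr/bin/env bash
# Sketch of the worktree-per-branch workflow, on a throwaway repo.
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q pristine
cd pristine
git -c user.name=t -c user.email=t@example.com commit -q --allow-empty -m init
git worktree add -q ../pmcheck -b wip/pmcheck  # new branch, new checkout, shared .git
cd ../pmcheck
git submodule update --init --recursive        # fix up confused submodules, if any
git rev-parse --abbrev-ref HEAD                # the worktree is on wip/pmcheck
```

Each worktree gets its own working files and index but shares the object store, so only the submodule checkouts cost extra disk.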

Also I'm currently in the rather comfortable situation of having an 8 core
azure VM just for GHC dev, which is pretty amazing. Doing the same as Ben
here: Having a tmux open with one (or more) tab per checkout I'm working on
in parallel. VSCode is my editor of choice and seamlessly picks up any SSH
connection I throw at it. Can highly recommend that when you're on a rather
weak machine like a laptop or convertible.

Am Mi., 24. Juli 2019 um 14:03 Uhr schrieb Richard Eisenberg <
r...@richarde.dev>:

>
>
> On Jul 23, 2019, at 10:48 PM, Daniel Gröber  wrote:
>
> I don't think you ever mentioned -- are you already using `git
> worktree` to get multiple source checkouts or are you working off a
> single build tree? I find using it essential to reducing context
> switching overhead.
>
>
> This is a good point. No, I'm not currently. Some post I read (actually, I
> think the manpage) said that `git worktree` and submodules don't mix, so I
> got scared off. Regardless, I don't think worktree will solve my problem
> exactly. It eliminates the annoyance of shuttling commits from one checkout
> to another, but that's not really a pain point for me. (Yes, it's a small
> annoyance, but I hit it only rarely, and it's quick to sort out.) Perhaps
> I'm missing something though about worktree that will allow more, e.g.,
> sharing of build products. Am I?
>
> Thanks!
> Richard


Re: Workflow question (changing codegen)

2019-06-30 Thread Sebastian Graf
Re: git worktree: That's the workflow I'm currently using. It has its
problems with submodules, see
https://stackoverflow.com/questions/31871888/what-goes-wrong-when-using-git-worktree-with-git-submodules.
But you can make it work with this git alias from the first answer:
https://gitlab.com/clacke/gists/blob/0c4a0b6e10f7fbf15127339750a6ff490d9aa3c8/.config/git/config#L12.
Just go into your main checkout and do `git wtas ../T9876`. AFAIR it
interacts weirdly with MinGW's git or git for Windows, but nothing you
can't work around.

Anyway, I was hoping that one day hadrian would be smart enough to have a
build directory for each branch or something, so that I would only need one
checkout where I can switch between branches as needed. In the meantime,
`git wtas` does what I want.

Am Sa., 29. Juni 2019 um 21:53 Uhr schrieb Richard Eisenberg <
r...@richarde.dev>:

> Just to pass on something that looks cool (I haven't tried it myself yet):
> git worktree. It seems git can hang several different checkouts of a repo
> in different directories. This seems far superior to my current habit of
> having many clones of ghc, sometimes going through machinations to get
> commits from one place to another. The documentation for git worktree seems
> quite approachable, so you might find it useful. I plan on using it in the
> future.
>
> Richard
>
> > On Jun 29, 2019, at 8:24 AM, Ben Gamari  wrote:
> >
> > On June 28, 2019 5:09:45 AM EDT, "Ömer Sinan Ağacan" <
> omeraga...@gmail.com> wrote:
> >> Hi all,
> >>
> >> I'm currently going through this torturous process and I'm hoping that
> >> someone
> >> here will be able to help.
> >>
> >> I'm making changes in the codegen. My changes are currently buggy, and
> >> I need a
> >> working stage 1 compiler to be able to debug. Basically I need to build
> >> libraries using the branch my changes are based on, then build stage 1
> >> with my
> >> branch, so that I'll be able to build and run programs using stage 1
> >> that uses
> >> my codegen changes. The changes are compatible with the old codegen
> >> (i.e. no
> >> changes in calling conventions or anything like that) so this should
> >> work.
> >>
> >> Normally I do this
> >>
> >>   $ git checkout master
> >>   $ git distclean && ./boot && ./configure && make
> >>   $ git checkout my_branch
> >>   $ cd compiler; make 1
> >>
> >> This gives me stage 1 compiler that uses my buggy codegen changes, plus
> >> libraries built with the old and correct codegen.
> >>
> >> However the problem is I'm also adding a new file in my_branch, and the
> >> build
> >> system just doesn't register that fact, even after adding the line I
> >> added to
> >> compiler/ghc.cabal.in to compiler/ghc.cabal. So far the only way to fix
> >> this
> >> that I could find was to run ./configure again, then run make for a few
> >> seconds
> >> at the top level, then do `make 1` in compiler/. Unfortunately even
> >> that doesn't
> >> work when the master branch and my_branch have different dates, because
> >> `make`
> >> in master branch produces a different version than the `make` in
> >> my_branch, so
> >> the interface files become incompatible.
> >>
> >> Anyone have any ideas on how to proceed here?
> >>
> >> Thanks,
> >>
> >> Ömer
> >> ___
> >> ghc-devs mailing list
> >> ghc-devs@haskell.org
> >> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
> >
> > In general I think it is wise to avoid switching branches in a tree you
> are actively developing in. The cost of switching both in the compilation
> time that it implies and the uncertain state that it leaves the tree in is
> in my opinion too high. It you want to compare your change against master I
> would recommend using two working directories.
> >
> >
> > Cheers,
> >
> > - Ben
> >
> > ___
> > ghc-devs mailing list
> > ghc-devs@haskell.org
> > http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Near-daily "Remote mirror update failed" e-mails from GitLab

2019-04-15 Thread Sebastian Graf
Hey,

sorry, I'm a little late to respond. I didn't push to GitHub, at least not 
consciously.

Let me know if you find that I screwed up somewhere.

Cheers,
Sebastian



Von: Ben Gamari 
Gesendet: Samstag, April 6, 2019 8:28 PM
An: Ryan Scott; Sebastian Graf
Cc: ghc-devs@haskell.org
Betreff: Re: Near-daily "Remote mirror update failed" e-mails from GitLab

Ryan Scott  writes:

> Almost every day now I receive an e-mail from GitLab titled "Remote
> mirror update failed", which contains something like:
>
> To
>
> ! [remote rejected] wip/dmd-arity -> wip/dmd-arity (cannot
> lock ref 'refs/heads/wip/dmd-arity': is at
> e1cc1254b81a7adadd8db77c7be625497264ab2b but expected
> f07b61d047967129a3ae0c56f8894d41c5a9b036)
>
> error: failed to push some refs to '[FILTERED]@github.com/ghc/ghc'
>
Hmm, how annoying. Strangely, the `wip/dmd-arity` branch currently
appears to be sitting at the same commit on GitLab and GitHub so I'm not
really sure what to do here.

sgraf, did you ever push a branch manually to the github.com/ghc/ghc
mirror?

Cheers,

- Ben

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Scaling back CI (for now)?

2019-02-02 Thread Sebastian Graf
Hi,

Am Sa., 2. Feb. 2019 um 16:09 Uhr schrieb Matthew Pickering <
matthewtpicker...@gmail.com>:

>
> All the other flavours should be run once the commit reaches master.
>
> Thoughts?
>

That's even better than my idea of only running them as nightlies. In favor!


> Cheers,
>
> Matt
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: [ANNOUNCE] You should try Hadrian

2019-01-29 Thread Sebastian Graf
Side note: On my Windows machine, where I use the environment provided
by `stack exec --no-ghc-package-path bash`, I have to do `bash -c 'pushd .
&& . /etc/profile && popd && ./configure --enable-tarballs-autodownload'`
or something along those lines for some time now (probably since the boot
script has been rewritten to python?).

Am Mo., 28. Jan. 2019 um 00:44 Uhr schrieb Phyx :

> Hi Andrey
>
> On Sun, Jan 27, 2019 at 10:49 PM Andrey Mokhov <
> andrey.mok...@newcastle.ac.uk> wrote:
>
>> Hi Tamar,
>>
>>
>>
>> Here is the relevant bullet point from the README:
>>
>>
>>
>> > On Windows, if you do not want to install MSYS, you can
>>
>> > use the Stack-based build script (Stack provides a managed
>>
>> > MSYS environment), as described in these instructions.
>>
>> > If you don't mind installing MSYS yourself or already have it,
>>
>> > you can use the Cabal-based build script.
>>
>
> Yes, I was referring to the "My first build" heading which had a call to
> build.bat, but it seems my branch was just old and the file was updated 11
> days ago to use cabal instead of stack.
> Now the rest of the file also makes sense. Apologies for that, I thought I
> had updated
>
>
>>
>> This claim is based on my experience. Installing the MSYS environment has
>> never worked out smoothly for me. Doing this via Stack was indeed more
>> robust (especially when struggling with building GHC on Windows CI!). Has
>> this been different in your experience?
>>
>>
>>
>
> Yes, stack does nothing special than just un-tar the binary distribution
> of msys2. The problem is that this binary distribution is not kept up to
> date unless things break. By that point they may have gotten so out of date
> that the distribution simply can't even be upgraded. e.g. A while ago they
> used a distribution that's so old it couldn't deal with pacman's
> invalidating old certificates, which means you couldn't use it to update
> ca-certificates.
>
> It also can't handle when msys upstream changes core dependencies. One
> such update is a change in March that introduced a cyclic dependency
> between catgets/libcatgets and some packages. Or when they change the
> package layout, as they did when removing the old shell scripts and adding
> mingw32.exe and mingw64.exe. I can name many more. The fact is the msys2
> installers are set up to work around these updates, or you must work around
> them when initializing the environment to fix these.
>
> And I see no evidence based on past issues that stack actually keeps their
> msys2 installs up to date. So I don't want to go into the business of
> managing stack msys2 issues for ghc builds.
>
> > I'm just confused when it was decided to switch the defaults,
>>
>> > and why, without any consultation.
>>
>>
>>
>> I’m not sure what you mean. Could you clarify? The file `doc/windows.md`
>> is 3 years old and hasn’t changed much since creation. The default build
>> script `build.bat` currently uses Cabal:
>>
>>
>>
>> ```
>>
>> rem By default on Windows we build Hadrian using Cabal
>>
>> hadrian/build.cabal.bat %*
>>
>> ```
>>
>
> Yes.. I'm pretty sure when I looked at it before today it was pointing to
> build.stack.bat, but that seems to be a two week old tree. So my fault
> there.
>
> Sorry, should have checked on gitlab!
>
> Regards,
> Tamar
>
>
>>
>>
>> P.S.: I’ve just noticed that `doc/windows.md` hasn’t been updated when
>> moving to GitLab, and created this MR to fix this:
>>
>>
>>
>> https://gitlab.haskell.org/ghc/ghc/merge_requests/239
>>
>>
>>
>> Please jump into the comments there if you’d like me to fix/clarify
>> anything.
>>
>>
>>
>> Thanks for reaching out!
>>
>>
>>
>> Cheers,
>>
>> Andrey
>>
>>
>>
>> *From:* Phyx [mailto:loneti...@gmail.com]
>> *Sent:* 27 January 2019 21:11
>> *To:* Andrey Mokhov ; Ben Gamari <
>> b...@well-typed.com>
>> *Cc:* GHC developers 
>> *Subject:* Re: [ANNOUNCE] You should try Hadrian
>>
>>
>>
>> Hi Andrey,
>>
>>
>>
>> I'm looking at
>> https://gitlab.haskell.org/ghc/ghc/blob/master/hadrian/README.md and
>> https://gitlab.haskell.org/ghc/ghc/blob/master/hadrian/doc/windows.md
>>
>> wondering why the default instructions for Windows are using stack, this
>> isn't currently the case.
>>
>>
>>
>> In order for ./boot and configure to work already you need to be in an
>> msys2 environment. So having stack install its own, un-updated msys2 is not
>> a workflow I would recommend.
>>
>>
>>
>> There's a dubious claim there that using stack is "more robust". What is
>> this claim based on?
>>
>> I'm just confused when it was decided to switch the defaults, and why,
>> without any consultation.
>>
>>
>>
>> Regards,
>>
>> Tamar
>>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Better perf

2018-12-14 Thread Sebastian Graf
Hey,

when going through Simon-nofib-notes, I stumbled over this thread from
March 2017, when I hadn't yet subscribed to this list:
https://mail.haskell.org/pipermail/ghc-devs/2017-March/013887.html.
Joachim and Simon were trying to pin-point seemingly random regressions and
improvements of ~5% to `binary-trees`.
It's hard to say in retrospect, but I suspect this is the same effect I
experienced in #15333 and I'm about to fix in #15999, e.g. that small
changes to allocations lead to big, uncorrelated jumps in runtime
performance due to GC.

Just wanted to record this here for posterity.

Cheers
Sebastian

Hi,
>
> Am Dienstag, den 07.03.2017, 22:55 + schrieb Simon Peyton Jones via
> ghc-devs:
> > > But: binary-trees runtime increases by 5%.
> >
> > David: might you look to see if there is any obvious reason for this
> > regression?  We could just accept it, but it's always good to know
> > why, and to document it.
>
> Turns out that my commit
> Add rule mapFB c (λx.x) = c
> fixed that regression:
> https://perf.haskell.org/ghc/#revision/2fa44217c1d977297eefb0d6c6aed7e128ca
>
> Maybe there is just a performance cliff there, and these jumps don’t
> really mean anything.
>
> Greetings,
> Joachim
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Residency profiles

2018-12-09 Thread Sebastian Graf
Ah, I was only looking at `+RTS --help`, not the users guide. Silly me.

Am Do., 6. Dez. 2018 um 20:53 Uhr schrieb Simon Marlow :

> It is documented!
> https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/runtime_control.html#rts-flag--F%20%E2%9F%A8factor%E2%9F%A9
>
> On Thu, 6 Dec 2018 at 16:21, Sebastian Graf  wrote:
>
>> Hey,
>>
>> thanks, all! Measuring with `-A1M -F1` delivers much more reliable
>> residency numbers.
>> `-F` doesn't seem to be documented. From reading `rts/RtsFlags.c` and
>> `rts/sm/GC.c` I gather that it's the factor by which to multiply the number
>> of live bytes by to get the new old gen size?
>> So effectively, the old gen will 'overflow' on every minor GC, neat!
>>
>> Greetings
>> Sebastian
>>
>> Am Do., 6. Dez. 2018 um 12:52 Uhr schrieb Simon Peyton Jones via ghc-devs
>> :
>>
>>> |  Right. A parameter for fixing the nursery size would be easy to
>>> implement,
>>> |  I think. Just a new flag, then in GC.c:resize_nursery() use the flag
>>> as the
>>> |  nursery size.
>>>
>>> Super!  That would be v useful.
>>>
>>> |  "Max. residency" is really hard to measure (need to do very frequent
>>> GCs),
>>> |  perhaps a better question to ask is "residency when the program is in
>>> state
>>> |  S".
>>>
>>> Actually, Sebastian simply wants to see an accurate, reproducible
>>> residency profile, and doing frequent GCs might well be an acceptable
>>> cost.
>>>
>>> Simon
>>> ___
>>> ghc-devs mailing list
>>> ghc-devs@haskell.org
>>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>>
>>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Residency profiles

2018-12-06 Thread Sebastian Graf
Hey,

thanks, all! Measuring with `-A1M -F1` delivers much more reliable
residency numbers.
`-F` doesn't seem to be documented. From reading `rts/RtsFlags.c` and
`rts/sm/GC.c` I gather that it's the factor by which to multiply the number
of live bytes by to get the new old gen size?
So effectively, the old gen will 'overflow' on every minor GC, neat!
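Concretely, the invocation from this thread looks like the following (a hedged sketch; the program name is a placeholder):

```shell
# -A1M: fix the nursery at 1MB; -F1: old-gen growth factor of 1, so the
# heap is collected (and residency sampled) far more frequently.
# -s prints the GC summary including max residency. 'MyProg' is hypothetical.
./MyProg +RTS -A1M -F1 -s
```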

Greetings
Sebastian

Am Do., 6. Dez. 2018 um 12:52 Uhr schrieb Simon Peyton Jones via ghc-devs <
ghc-devs@haskell.org>:

> |  Right. A parameter for fixing the nursery size would be easy to
> implement,
> |  I think. Just a new flag, then in GC.c:resize_nursery() use the flag as
> the
> |  nursery size.
>
> Super!  That would be v useful.
>
> |  "Max. residency" is really hard to measure (need to do very frequent
> GCs),
> |  perhaps a better question to ask is "residency when the program is in
> state
> |  S".
>
> Actually, Sebastian simply wants to see an accurate, reproducible
> residency profile, and doing frequent GCs might well be an acceptable
> cost.
>
> Simon
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: perf.haskell.org functional again

2018-11-30 Thread Sebastian Graf
Hi,

just came here to resurrect the thread and point out that nofib currently
isn't really run in parallel:
https://ghc.haskell.org/trac/ghc/ticket/15976#ticket
Also it's unclear to me how that would be possible without rewriting the
whole benchmark harness in Python or Haskell, because make parallelism
would probably interleave outputs of different rules, giving nofib-analyse
a hard time.

Greetings
Sebastian

Am Mo., 30. Okt. 2017 um 00:14 Uhr schrieb Joachim Breitner <
m...@joachim-breitner.de>:

> Hi,
>
> Am Sonntag, den 29.10.2017, 23:58 +0100 schrieb Sebastian Graf:
> > Hi,
> >
> > just wanted to throw in the idea of parallelising the benchmark suite
> > (hurts to even write that, but cachegrind) to speed up the build, if
> > ever need be.
>
> https://github.com/nomeata/gipeda/blob/master/ghc/run-speed.sh#L143
>
> good idea indeed – I’ve been doing that since I started running
> cachegrind ;-)
>
>
> Joachim
>
> --
> Joachim Breitner
>   m...@joachim-breitner.de
>   http://www.joachim-breitner.de/
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Understanding UniqSupply

2018-07-23 Thread Sebastian Graf
Hi Simon,

>
>1. Judging from SimplCore, we probably want to `splitUniqSupply` after
>each iteration/transformation, either through a call to `splitUniqSupply`
>or `getUniqueSupplyM`. Is that right?
>
> I don’t understand the question.   If you use the same supply twice,
> you’ll get (precisely) the same uniques.  That may or may not be ok


I guess this was wrt. threading UniqSupply through each transformation vs.
splitting it before a transformation. We want to split or to thread,
otherwise we possibly re-use some Uniques, because it's a regular purely
functional data structure, as you noted. Each transformation will do its
own splitting/taking after that, so my question was probably bogus to begin
with.

Thanks, that cleared up a lot of things for me!
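Simon's point that the supply is just a purely functional tree can be illustrated with a toy model (not GHC's actual implementation, which draws fresh numbers effectfully via genSym; here the tree is simply numbered heap-style so every node carries a distinct Int):

```haskell
-- Toy stand-in for UniqSupply: an infinite binary tree of distinct Ints.
data Supply = Supply Int Supply Supply

mkSupply :: Supply
mkSupply = go 0
  where go k = Supply k (go (2 * k + 1)) (go (2 * k + 2))

takeUnique :: Supply -> (Int, Supply)
takeUnique (Supply n l _) = (n, l)

splitSupply :: Supply -> (Supply, Supply)
splitSupply (Supply _ l r) = (l, r)

main :: IO ()
main = do
  let s = mkSupply
  -- Using the same supply twice yields precisely the same uniques:
  print (fst (takeUnique s) == fst (takeUnique s))   -- True
  -- Splitting yields disjoint subtrees, hence fresh uniques on each side:
  let (l, r) = splitSupply s
  print (fst (takeUnique l) == fst (takeUnique r))   -- False
```

This is exactly why reusing a supply is only OK when identical uniques do no harm, and why one splits (or threads) before handing it to a transformation.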


Am Mo., 23. Juli 2018 um 14:21 Uhr schrieb Simon Peyton Jones <
simo...@microsoft.com>:

> Some quick responses
>
>
>
> 1. *Splitting*
>
> What's the need for splitting anyway?
>
>
>
> Just so you can use uniques in a tree-like way, without threading the
> supply around.  No more than that.
>
>
>
> This is not needed everywhere.  For example, the Simplifier threads it
> thus:
>
>
>
> newtype SimplM result
>
>   =  SM  { unSM :: SimplTopEnv  -- Envt that does not change much
>
> -> UniqSupply   -- We thread the unique supply because
>
> -- constantly splitting it is rather
> expensive
>
> -> SimplCount
>
> -> IO (result, UniqSupply, SimplCount)}
>
>
>
> I suspect that (now that SimplM is in IO anyway) we could use an IORef
> instead, and maybe speed up the compiler.
>
>
>
> But perhaps not all uses are so simple to change.
>
>
>
> 2. *The* *tree*
>
>
>
> The crucial thing is that there /is/ a data structure – a tree, that is
> the unique supply. So if you have
>
>  f u s = ….(splitUniqueSupply us)…..(splitUniqueSupply us)….
>
> you’ll get the same trees in the two calls.  The supply is just a
> purely-functional tree.
>
>
>
> So, for example
>
>- The `unsafeInterleaveIO` makes it so that `genSym` is actually
>forced before any of the recursive calls to `mk_split` force their
>`genSym`, regardless of evaluation order
>
> I don’t think this is important, except perhaps to avoid creating a thunk.
>
>- This guarantees a certain partial order on produced uniques: Any
>parent `UniqSupply`'s `Unique` is calculated by a call to
>compiler/cbits/genSym.c#genSym() before any `Unique`s of its offspring are
> are]
>- The order of `Unique`s on different off-springs of the same
>`UniqSupply` is determined by evaluation order as a result of
>`unsafeInterleaveIO`, much the same as when we create two different
>`UniqSupply`s by calls to `mkSplitUniqSupply`
>- So, `unfoldr (Just . takeUniqFromSupply) us) !! n` is always
>deterministic and strictly monotone, in the sense that even forcing the
>expression for n=2 before n=1 will have a lower `Unique` for n=1 than for
>n=2.
>
> I don’t think any of these points are important or relied on.  A different
> impl could behave differently.
>
>1. `takeUniqSupply` returns as 'tail' its first off-spring, whereas
>`uniqsFromSupply` always recurses into its second off-spring. By my
>intuition above, this shouldn't really make much of a difference, so what
>is the motivation for that?
>
> I think this is unimportant. I.e. it should be fine to change it.
>
>
>
>1. Judging from SimplCore, we probably want to `splitUniqSupply` after
>each iteration/transformation, either through a call to `splitUniqSupply`
>or `getUniqueSupplyM`. Is that right?
>
> I don’t understand the question.   If you use the same supply twice,
> you’ll get (precisely) the same uniques.  That may or may not be ok
>
>
>
> SImon
>
>
>
> *From:* ghc-devs  *On Behalf Of *Sebastian
> Graf
> *Sent:* 23 July 2018 12:06
> *To:* ghc-devs 
> *Subject:* Understanding UniqSupply
>
>
>
> Hi all,
>
>
>
> I'm trying to understand when it is necessary to `splitUniqSupply`, or
> even to create my own supply with `mkSplitUniqSupply`.
>
>
>
> First, my understanding of how `mkSplitUniqSupply` (
> https://hackage.haskell.org/package/ghc-8.4.1/docs/src/UniqSupply.html#mkSplitUniqSupply

Functor, Foldable and Traversable for Expr

2018-06-18 Thread Sebastian Graf
Hi everyone,

I'm repeatedly wondering why there are no `Functor`, `Foldable` and
`Traversable` instances for `Expr`.

Is this just by lack of motive?
I could help there: I was looking for a function that would tell me if an
expression mentions `makeStatic`. After spending some minutes searching in
the code base, I decided to roll my own thing in `CoreUtils`.
I really couldn't think about a good name, so I settled for
`anyReferenceMatching :: (b -> Bool) -> Expr b -> Bool` and realized that I
could generalize the function to `foldMapExpr :: Monoid m => (b -> m) ->
Expr b -> m`.

Occasionally this need pops up and I really want to avoid writing my own
traversals over the syntax tree. So, would anyone object to a patch
implementing these instances?
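For illustration, here is what the derived instances buy on a cut-down stand-in for `Expr` (hedged: the real CoreSyn type has more constructors and stores occurrences differently, but the shape of the query is the same):

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}
import Data.Monoid (Any (..))

-- Illustrative miniature of Core's Expr, parameterised over binders.
data Expr b
  = Var b
  | Lit Int
  | App (Expr b) (Expr b)
  | Lam b (Expr b)
  deriving (Functor, Foldable, Traversable)

-- With Foldable derived, the query from this email is just foldMap:
anyReferenceMatching :: (b -> Bool) -> Expr b -> Bool
anyReferenceMatching p = getAny . foldMap (Any . p)

main :: IO ()
main = print (anyReferenceMatching (== "makeStatic")
                (App (Var "f") (Var "makeStatic")))   -- prints True
```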

Thanks
Sebastian
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Curious demand in a function parameter

2018-03-25 Thread Sebastian Graf
Hey,

the problem is with eta-expansion in this case, I believe, or rather the
lack thereof.
Your recursive `f` is always bottoming out, which makes GHC not want to
eta-expand the RealWorld# parameter (Note [State hack and bottoming
functions] in CoreArity.hs is probably related).
If you change `f`'s last branch to `return 2`, it's no longer (detectably)
bottoming out and you get the 'desired' behavior:

test.exe: Prelude.undefined
CallStack (from HasCallStack):
  error, called at libraries\base\GHC\Err.hs:79:14 in base:GHC.Err
  undefined, called at test.hs:25:7 in main:Main

Greetings,
Sebastian




2018-03-25 9:14 GMT+02:00 Ömer Sinan Ağacan :

> Hi,
>
> In this program
>
> {-# LANGUAGE MagicHash #-}
>
> module Lib where
>
> import Control.Exception
> import GHC.Exts
> import GHC.IO
>
> data Err = Err
>   deriving (Show)
> instance Exception Err
>
> f :: Int -> Int -> IO Int
> f x y | x > 0 = IO (raiseIO# (toException Err))
>   | y > 0 = return 1
>   | otherwise = return 2
>
> when I compile this with 8.4 -O2 I get a strict demand on `y`:
>
> f :: Int -> Int -> IO Int
> [GblId,
>  Arity=3,
>  Str=,
>  ...]
>
> but clearly `y` is not used on all code paths, so I don't understand why we
> have a strict demand here.
>
> I found this example in the comments around `raiseIO#`:
>
> -- raiseIO# needs to be a primop, because exceptions in the IO monad
> -- must be *precise* - we don't want the strictness analyser turning
> -- one kind of bottom into another, as it is allowed to do in pure
> code.
> --
> -- But we *do* want to know that it returns bottom after
> -- being applied to two arguments, so that this function is strict in y
> -- f x y | x>0   = raiseIO blah
> --   | y>0   = return 1
> --   | otherwise = return 2
>
> However it doesn't explain why we want to be strict on `y`.
>
> Interestingly, when I try to make GHC generate a worker and a wrapper for
> this
> function to make the program fail by evaluating `y` eagerly I somehow got a
> lazy demand on `y`:
>
> {-# LANGUAGE MagicHash #-}
>
> module Main where
>
> import Control.Exception
> import GHC.Exts
> import GHC.IO
>
> data Err = Err
>   deriving (Show)
> instance Exception Err
>
> f :: Int -> Int -> IO Int
> f x y | x > 0 = IO (raiseIO# (toException Err))
>   | y > 0 = f x (y - 1)
>   | otherwise = f (x - 1) y
>
> main = f 1 undefined
>
> I was thinking that this program should fail with "undefined" instead of
> "Err",
> but the demand I got for `f` became:
>
> f :: Int -> Int -> IO Int
> [GblId,
>  Arity=2,
>  Str=,
>  ...]
>
> which makes sense to me. But I don't understand how my changes can change
> `y`'s
> demand, and why the original demand is strict rather than lazy. Could
> anyone
> give me some pointers?
>
> Thanks
>
> Ömer
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: An idea for a different style of metaprogramming evaluation using the optimiser

2018-02-27 Thread Sebastian Graf
Hey Matt,

cool idea! Also it looks like such a tool could 'solve' stream fusion:
https://ghc.haskell.org/trac/ghc/ticket/915#comment:52

Greetings
Sebastian

2018-02-27 18:01 GMT+01:00 Joachim Breitner :

> Hi,
>
> something like this would be great. I don’t have a sense yet of what
> “something” should be like.
>
>
> Am Dienstag, den 27.02.2018, 09:59 + schrieb Matthew Pickering:
> > To go back to the power example, the recursive
> > condition would have to be an inductively defined natural (data N = Z
> > | S N) rather than an Int as the comparison operator for integers
> > can't be evaluated by the optimiser.
>
> Sure they can:
>
> $ cat ConstantFolding.hs
> {-# LANGUAGE TemplateHaskell #-}
> {-# OPTIONS_GHC -fplugin=Test.Inspection.Plugin #-}
> module ConstantFolding where
>
> import Test.Inspection
>
> ltInt :: Bool
> ltInt = (3::Int) > 2
>
> ltInteger :: Bool
> ltInteger = (3::Integer) > 2
>
> true :: Bool
> true = True
>
>
> inspect $ 'ltInt === 'true
> inspect $ 'ltInteger === 'true
>
> $ ghc -O ConstantFolding.hs
> [1 of 1] Compiling ConstantFolding  ( ConstantFolding.hs,
> ConstantFolding.o )
> ConstantFolding.hs:17:1: ltInt === true passed.
> ConstantFolding.hs:18:1: ltInteger === true passed.
> inspection testing successful
>   expected successes: 2
>
>
>
> As an alternative with a maybe simpler user interface (and probably
> less power), I wonder if we can create a magic function
> > compileTimeWHNF :: a -> a
> or
> > compileTimeNF :: a -> a
> and a GHC core plugin (or eventually built-in thing) that finds these
> magic functions and evaluates their arguments, using the simplifier.
>
>
> Cheers,
> Joachim
>
> --
> Joachim Breitner
>   m...@joachim-breitner.de
>   http://www.joachim-breitner.de/
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: How to load & parse an HI (interface) file?

2017-12-03 Thread Sebastian Graf
Hey,

there's this relatively recent thread on finding instances of a type class:

https://mail.haskell.org/pipermail/ghc-devs/2017-May/014217.html

I'm sorry, but I couldn't find a better archive for ghc-devs.

Enjoy
Sebastian
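If you only need to look at an interface file by hand, there is also the command-line route discussed below; a hedged sketch, assuming a freshly compiled module `M` (file names are illustrative, and the textual output format varies between GHC versions):

```shell
# Illustrative only: compile a module, then pretty-print its interface.
printf 'module M where\nanswer :: Int\nanswer = 42\n' > M.hs
ghc -c M.hs                # produces M.hi and M.o
ghc --show-iface M.hi      # dumps exports, declarations, instances, ...
```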

On Sun, Dec 3, 2017 at 3:43 AM, Brandon Allbery  wrote:

> The problem with the API is it's complex and can break between ghc
> versions.
> But --show-iface is even more fragile and prone to break between ghc
> versions.
> The history of the plugins package constitutes a record of both kinds of
> pain.
>
> On Sat, Dec 2, 2017 at 9:11 PM, Saurabh Nanda 
> wrote:
>
>> > I would be cautious about using the ghc-api hi file interfaces; hi
>> files turn out to interact with a
>> > lot of low-level parts in complex ways (even to the extent that they're
>> a large part of why ghc
>> > can't parallelize builds itself and attempts to change that have mostly
>> failed).
>>
>> Are you cautioning against using the GHC API (as opposed to the
>> --show-iface command line interface)
>> or using HI files themselves?
>>
>> -- Saurabh.
>>
>>
>> On Sun, Dec 3, 2017 at 2:04 AM, Brandon Allbery 
>> wrote:
>>
>>> I would be cautious about using the ghc-api hi file interfaces; hi files
>>> turn out to interact with a lot of low-level parts in complex ways (even to
>>> the extent that they're a large part of why ghc can't parallelize builds
>>> itself and attempts to change that have mostly failed).
>>>
>>> But if you must do this, you *really* want to have
>>> https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler ready to hand
>>> --- and go through it first so you have some idea of how it works; much of
>>> it is links to the lower level details (often straight into the source).
>>>
>>> On Sat, Dec 2, 2017 at 10:59 AM, Saurabh Nanda 
>>> wrote:
>>>
 (GHC newbie alert -- is this the right mailing list for these kind of
 questions?)

 I"m writing some code to figure out all the instances of particular
 type-classes and after exploring a lot of options (hlint, haskell-src-exts,
 annotations, doctests, etc), I realized that the compiler had already
 figured it out and written it to disk for me!

 More digging led me to
 https://www.stackage.org/haddock/lts-9.0/ghc-8.0.2/LoadIface.html#v:loadSrcInterface
 after which I
 got stuck. How does one call this function? Specifically:

 * What is SDoc and how to construct a reasonable value for this
 argument?
 * IsBootInterface would mostly be False, right?
 * What does `Maybe FastString` represent and how does one construct it?
 * Finally how does one evaluate the resulting monadic action to get
 access to the underlying `ModIface`?

 -- Saurabh.


 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


>>>
>>>
>>> --
>>> brandon s allbery kf8nh   sine nomine
>>> associates
>>> allber...@gmail.com
>>> ballb...@sinenomine.net
>>> unix, openafs, kerberos, infrastructure, xmonad
>>> http://sinenomine.net
>>>
>>
>>
>>
>> --
>> http://www.saurabhnanda.com
>>
>
>
>
> --
> brandon s allbery kf8nh   sine nomine
> associates
> allber...@gmail.com
> ballb...@sinenomine.net
> unix, openafs, kerberos, infrastructure, xmonad
> http://sinenomine.net
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

