Re: Analyzing Haskell call graph (was: Thread on Discourse - HIE file processing)

2023-08-09 Thread Sylvain Henry
Hi Tristan,

I wouldn't do this with Core (cf. the inlining issue, and the difficulty of
associating what you find with the source syntax).

I think you should use the output of the renamer instead. Either with a GHC 
plugin using `renamedResultAction` or just by dumping the renamed AST (fully 
qualified) with -ddump-rn-ast -ddump-to-file and grepping for the names you 
want.

Cheers,
Sylvain


On 9 August 2023 at 21:07, Tristan Cacqueray wrote:
>
>On Mon, Jul 31, 2023 at 16:26 Tristan Cacqueray wrote:
>> On Mon, Jul 31, 2023 at 11:05 David Christiansen via ghc-devs wrote:
>>> Dear GHC devs,
>>>
>>> I think that having automated security advisory warnings from build
>tools
>>> is important for Haskell adoption in certain industries. This can be
>done
>>> based on build plans, but a package is really the wrong granularity
>- a
>>> large, widely-used package might export a little-used definition
>that is
>>> the subject of an advisory, and it would be good to warn only the
>users of
>>> said definition (cf base and readFloat).
>>>
>>> Tristan is exploring using HIE files to do this check, but I don't
>know if
>>> you read Discourse, where he posted the question:
>>>
>https://discourse.haskell.org/t/rfc-using-hie-files-to-list-external-declarations-for-cabal-audit/7147
>>>
>>
>> Thank you David for bringing this up here. One thing to note is that
>we
>> would need hie files for ghc libraries, as proposed in:
>>   https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1337
>>
>> Cheers,
>> -Tristan
>
>Dear GHC devs,
>
>To recap, the goal of this project is to check if a given declaration
>is
>used by a package. For example, I would like to check if such
>definition: "package:Module.name" is reachable from another module.
>
>In this post I list the considered options, and raise some questions
>about using the simplified core from .hi files.
>
>I would appreciate it if you could have a look and help me figure out the
>remaining blockers. Note that I'm not very familiar with the GHC
>internals and how to properly read Core expressions, so any feedback
>would be appreciated.
>
>
># Context and Problem Statement
>
>We would like to check if a package is affected by a known
>vulnerability. Instead of looking at the build dependencies names and
>versions, we would like to search for individual functions. This is
>particularly important to avoid false alarms when a given vulnerability
>only appears in a rarely used declaration of a popular package.
>
>Therefore, we need a way to search the whole call graph to assert with
>confidence that a given declaration is not used (i.e. not reachable).
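To make the reachability check concrete, here is a minimal, self-contained sketch (illustrative only: `Decl`, `reachable`, and `affectedBy` are invented names, and cabal-audit's real representation differs):

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

-- A declaration key such as "package:Module.name".
type Decl = String

-- Call graph: each declaration maps to the declarations it references.
type CallGraph = Map.Map Decl [Decl]

-- Every declaration reachable from the given roots (depth-first,
-- with a visited set so recursive bindings terminate).
reachable :: CallGraph -> [Decl] -> Set.Set Decl
reachable g = go Set.empty
  where
    go seen [] = seen
    go seen (d:ds)
      | d `Set.member` seen = go seen ds
      | otherwise = go (Set.insert d seen) (Map.findWithDefault [] d g ++ ds)

-- A package is affected only if some advisory declaration is reachable.
affectedBy :: CallGraph -> [Decl] -> [Decl] -> Bool
affectedBy g roots = any (`Set.member` reachable g roots)

main :: IO ()
main = do
  let g = Map.fromList
        [ ("app:Main.main", ["base:Numeric.showHex"])
        , ("base:Numeric.showHex", [])
        ]
  print (affectedBy g ["app:Main.main"] ["base:Numeric.readFloat"])  -- False
  print (affectedBy g ["app:Main.main"] ["base:Numeric.showHex"])    -- True
```

The visited set is what makes the traversal safe on mutually recursive declarations, which any real Haskell call graph contains.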
>
>
># Considered Options
>
>To obtain the call graph data, the following options are considered:
>
>* .hie files produced when using the `-fwrite-ide-info` flag.
>* .modpack files produced by the [wpc-plugin][grin].
>* custom GHC plugin.
>* .hi files containing the simplified core when using the
>  `-fwrite-if-simplified-core` flag.
>
>
># Pros and Cons of the Options
>
>### Hie files
>
>This option is similar to what [weeder][weeder] already implements.
>However, this file format is designed for IDEs, and it may not be
>suitable for our problem. For example, RULES, deriving,
>RebindableSyntax and Template Haskell are not well captured.
>
>[weeder]: https://github.com/ocharles/weeder/
>
>### Modpack
>
>This option appears to work, but it seems overkill. I don't think we
>need to reach for the STG representation.
>
>[grin]:
>https://github.com/grin-compiler/ghc-whole-program-compiler-project
>
>### Custom GHC plugin
>
>This option enables extra metadata to be collected, but if using the
>simplified core is enough, then it is just an extra step compared to
>using .hi files.
>
>### Hi files
>
>Using .hi files is the only option that doesn't require extra
>compilation artifacts: the necessary files are already part of the
>packages.
>
>To collect hie files or files generated by a GHC plugin,
>ghc/cabal/stack
>all need some extra work:
>
>- ghc libraries don't ship hie files
>([issue!16901](https://gitlab.haskell.org/ghc/ghc/-/issues/16901)).
>- cabal needs recent changes for hie files
>([PR#9019](https://github.com/haskell/cabal/pull/9019)) and plugin
>artifacts ([PR#8662](https://github.com/haskell/cabal/pull/8662)).
>- stack doesn't seem to install hie files for global libraries.
>
>Moreover, creating artifacts with a plugin for ghc libraries may
>require manual steps because these libraries are not built by the
>end user.
>
>Therefore, using .hi files is the most straightforward solution.
>
>
># Questions
>
>In this section I present the current implementation of
>[cabal-audit](https://github.com/TristanCacqueray/cabal-audit/).
>
>
>## Collecting dependencies from core
>
>In the
>[cabal-audit-core:CabalAudit.Core](https://github.com/TristanCacqueray/cabal-audit/blob/main/cabal-audit-core/src/CabalAudit/Core.hs)
>module I implemented the logic to extract the call graph from core
>expression into a list of declarations composed of
>  

Re: Performance of small allocations via prim ops

2023-04-12 Thread Sylvain Henry



One complication is that currently GHC has no way to know the value of
LARGE_OBJECT_THRESHOLD (which is a runtime system macro). Typically to
handle this sort of thing we use utils/deriveConstants to generate a
Haskell binding mirroring the value of the C declaration. However,
as GHC becomes runtime-retargetable we may need to revisit this design.


Since 
https://gitlab.haskell.org/ghc/ghc/-/commit/085983e63bfe6af23f8b85fbfcca8db4872d2f60 
(2021-03) we don't do this. We only read constants from the header file 
provided by the RTS unit. Adding one more constant for 
LARGE_OBJECT_THRESHOLD shouldn't be an issue.


Cheers

Sylvain

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Almost all tests fail after 08bf28819b

2022-11-24 Thread Sylvain Henry

With devel2 only the static libs are built. Hence the RTS linker is used.

https://gitlab.haskell.org/ghc/ghc/-/commit/08bf28819b78e740550a73a90eda62cce8d21c#de77f4916e67137b0ad5e12cc6c2704c64313900 
made some symbols public (newArena, arenaAlloc, arenaFree) but they 
weren't added to rts/RtsSymbols.c so the RTS linker isn't aware of them.


We should have a CI job testing the static configuration. Wait, there is 
one, and tests have been failing with this error too: 
https://gitlab.haskell.org/ghc/ghc/-/jobs/1242286#L2859 Too bad it was a 
job allowed to fail :)




On 24/11/2022 05:46, Erdi, Gergo via ghc-devs wrote:


Nope, still getting the same error after deleting all of _build. I'm also on 
AMD64 Linux. I've tried with GHC 9.2.5 and 9.4.3. For reference, my exact 
command line (after deleting _build) is:

./boot && ./configure && ./hadrian/build-stack --flavour=devel2 -j10  test 
--only="ann01"

-Original Message-
From: Matthew Farkas-Dyck  
Sent: Wednesday, November 23, 2022 2:29 PM

To: Erdi, Gergo
Cc:ghc-devs@haskell.org
Subject: Re: Almost all tests fail after 08bf28819b

I had the same problem. Deleting the _build directory and rebuilding solved it 
for me.

I'm also on amd64 Linux, by the by.

This email and any attachments are confidential and may also be privileged. If 
you are not the intended recipient, please delete all copies and notify the 
sender immediately. You may wish to refer to the incorporation details of 
Standard Chartered PLC, Standard Chartered Bank and their subsidiaries at 
https: //www.sc.com/en/our-locations

Where you have a Financial Markets relationship with Standard Chartered PLC, Standard 
Chartered Bank and their subsidiaries (the "Group"), information on the 
regulatory standards we adhere to and how it may affect you can be found in our 
Regulatory Compliance Statement at https: //www.sc.com/rcs/ and Regulatory Compliance 
Disclosures at http: //www.sc.com/rcs/fm

Insofar as this communication is not sent by the Global Research team and 
contains any market commentary, the market commentary has been prepared by the 
sales and/or trading desk of Standard Chartered Bank or its affiliate. It is 
not and does not constitute research material, independent research, 
recommendation or financial advice. Any market commentary is for information 
purpose only and shall not be relied on for any other purpose and is subject to 
the relevant disclaimers available at https: 
//www.sc.com/en/regulatory-disclosures/#market-disclaimer.

Insofar as this communication is sent by the Global Research team and contains 
any research materials prepared by members of the team, the research material 
is for information purpose only and shall not be relied on for any other 
purpose, and is subject to the relevant disclaimers available at https: 
//research.sc.com/research/api/application/static/terms-and-conditions.

Insofar as this e-mail contains the term sheet for a proposed transaction, by 
responding affirmatively to this e-mail, you agree that you have understood the 
terms and conditions in the attached term sheet and evaluated the merits and 
risks of the transaction. We may at times also request you to sign the term 
sheet to acknowledge the same.

Please visit https: //www.sc.com/en/regulatory-disclosures/dodd-frank/ for 
important information with respect to derivative products.


Re: Hadrian problem

2022-07-12 Thread Sylvain Henry

Hi Simon,

Matt should have fixed it with 
https://gitlab.haskell.org/ghc/ghc/-/merge_requests/8556


Sylvain


On 12/07/2022 14:24, Simon Peyton Jones wrote:
I'm in a GHC tree, built with Hadrian, I'm getting this red problem.  
But compilation has got way past compiling base.


why is it looking in my .ghc/... directory?   It should be looking in 
my build tree.


Simon

bash$ ~/code/HEAD-1/_build/ghc-stage1 -c Foo.hs
Loaded package environment from 
/home/simonpj/.ghc/x86_64-linux-9.5.20220628/environments/default

<command line>: cannot satisfy -package-id base-4.17.0.0
    (use -v for more information)

bash$ cat ~/code/HEAD-1/_build/ghc-stage1
"/home/simonpj/code/HEAD-1/_build/stage0/bin/ghc" 
"-no-global-package-db" "-package-db 
/home/simonpj/code/HEAD-1/_build/stage1/lib/package.conf.d" "$@"






Re: Hadrian

2022-07-11 Thread Sylvain Henry

Hi Simon,

You have to re-run `./configure` in cases like this. It's because 
`compiler/ghc.cabal` is generated from `compiler/ghc.cabal.in` by 
`./configure`. This isn't tracked by Hadrian.


>Surely that should not happen? I'll try make clean; but isn't this a bug?

Hopefully, once the `make` build system is removed it will be easy to 
make Hadrian (instead of `./configure`) generate and track this file. In 
fact I already did this in an MR more than a year ago but it was blocked 
on make-removal.


Sylvain


On 11/07/2022 17:09, Simon Peyton Jones wrote:

(apols for premature send)

I am working on a branch of GHC, actually on !8210.  I have rebased on 
master.  Then I say

hadrian/build
and I get the log below.  It falls over saying
No generator for _build/stage0/compiler/build/GHC/Unit/Module/Name.hs.

Surely that should not happen?

I'll try make clean; but isn't this a bug?

Simon



Re: What to do with gmp wasm fixes

2022-05-23 Thread Sylvain Henry

Hi Cheng,

Couldn't the changes be upstreamed into libgmp directly? Other projects 
may benefit from being able to compile libgmp into wasm. Or are the 
changes specific to GHC?


> - Send a PR to gmp-tarballs, including our patch (doesn't alter 
behavior on native archs) and the updated tarball build script


I'm not sure if it's still the case, but in the past we applied some 
patches to gmp before building it (to use fPIC and to remove the docs). 
So it should be possible to do it for wasm.


> - Give up gmp completely, only support native bignum for wasm32.

That's the solution we will use for the JS backend. For wasm, it would 
be great to compare performance between both native and gmp ghc-bignum 
backends. libgmp uses some asm code when it is directly compiled to 
x86-64 asm for example and afaict passing through wasm will make it use 
less optimized code. It may make the gmp backend less relevant: only 
benchmarks will tell. I would ensure that everything works with 
ghc-bignum's native backend before worrying about using gmp.


Cheers,
Sylvain


On 20/05/2022 13:43, Cheng Shao wrote:

Hi all,

The ghc wasm32-wasi build needs to patch gmp. Currently, our working
branch uses a fork of gmp-tarballs that includes the patch into the
tarball, but at some point we need to upstream it. What's the best way
to add these fixes?

- Send a PR to gmp-tarballs, including our patch (doesn't alter
behavior on native archs) and the updated tarball build script
- Don't touch gmp-tarballs, use "system" gmp, so the wasm32-wasi gmp
build process is decoupled from ghc build
- Give up gmp completely, only support native bignum for wasm32.

Cheers.
Cheng


"Modularizing GHC" paper

2022-05-04 Thread Sylvain Henry

Hi GHC devs,

With John Ericson and Jeffrey Young we wrote a paper about the 
modularization of GHC. It gives a global picture for the refactorings we 
have been performing (c.f. e.g. #17957) and some potential future ones.


Announce blog post: 
https://hsyl20.fr/home/posts/2022-05-03-modularizing-ghc-paper.html

Paper: https://hsyl20.fr/home/files/papers/2022-ghc-modularity.pdf
Discussion on Reddit: 
https://www.reddit.com/r/haskell/comments/uhdu4l/modularizing_ghc_paper/


We welcome any feedback, here or on reddit.

Cheers,
Sylvain



Re: [clash-language] Avoiding `OtherCon []` unfoldings, restoring definitions from unfoldings

2022-04-01 Thread Sylvain Henry
The unfolding is present if you add `-fno-omit-interface-pragmas` and 
dump with `-ddump-simpl`. CorePrep drops unfoldings, see Note [Drop 
unfoldings and rules] in GHC.CoreToStg.Prep.


The logic for unfolding exposition by Tidy is now in: 
https://gitlab.haskell.org/ghc/ghc/-/blob/a952dd80d40bf6b67194a44ff71d7bf75957d29e/compiler/GHC/Driver/Config/Tidy.hs#L40


If you use the GHC API you can now invoke Tidy with different TidyOpts.


On 01/04/2022 15:37, ÉRDI Gergő wrote:
This doesn't quite match my experience. For example, the following 
toplevel definition gets an `OtherCon []` unfolding:


nonEmptySubsequences :: [a] -> [[a]]
nonEmptySubsequences [] = []
nonEmptySubsequences (x:xs) = [x] : foldr f [] (nonEmptySubsequences xs)
  where
    f ys r = ys : (x:ys) : r

as can be seen with:

$ ghc -fforce-recomp -fexpose-all-unfoldings -ddump-prep 
-dsuppress-uniques A.hs


-- RHS size: {terms: 37, types: 55, coercions: 0, joins: 0/6}
A.nonEmptySubsequences [Occ=LoopBreaker] :: forall a. [a] -> [[a]]
[GblId, Arity=1, Unf=OtherCon []]
A.nonEmptySubsequences
  = \ (@ a) (ds [Occ=Once1!] :: [a]) -> ...


So this is not a lifted `case`-bound variable, but a bonafide 
user-originating toplevel definition. And its value also isn't bottom.



On Fri, 1 Apr 2022, Christiaan Baaij wrote:

So if I understand correctly, OtherCon is only created here:
https://gitlab.haskell.org/ghc/ghc/-/blob/a952dd80d40bf6b67194a44ff71d7bf75957d29e/compiler/GHC/Core/Opt/Simplify.hs#L3071-3077

simplAlt env _ imposs_deflt_cons case_bndr' cont' (Alt DEFAULT bndrs 
rhs)

  = assert (null bndrs) $
    do  { let env' = addBinderUnfolding env case_bndr'
                                        (mkOtherCon imposs_deflt_cons)
                -- Record the constructors that the case-binder 
*can't* be.

        ; rhs' <- simplExprC env' rhs cont'
        ; return (Alt DEFAULT [] rhs') }

What you should know is that in Core case-expressions are actually 
more like:


case scrut as b of alts

where `b` binds the evaluated result of `scrut`.

So if I am to understand the `simplAlt` code correctly, `case_bndr'`
is the binder for the evaluated result of `scrut`.
And what is recorded in the unfolding is that once we get to the
DEFAULT pattern, we know that `case_bndr'` cannot be the constructors in
`imposs_deflt_cons` (probably the constructors matched by the other
alternatives).

Now... there's also a FloatOut pass, which might have floated that
`case_bndr'` to the top level.
And I think that is what you're seeing, and I think you can simply
ignore them.
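For readers following the thread, here is a toy model (invented types, not GHC's actual `Unfolding`) of why an `OtherCon []` unfolding is a dead end for reconstruction: it records only negative information about an evaluated binder, never a right-hand side:

```haskell
-- Toy stand-in for GHC's unfolding info; the constructors mirror the idea only.
data Unfolding
  = NoUnfolding            -- nothing known about the binder
  | OtherCon [String]      -- evaluated, and *not* one of these constructors
  | CoreUnfolding String   -- a full RHS is available (a String here for brevity)
  deriving (Eq, Show)

-- Only a CoreUnfolding carries enough information to restore a definition;
-- OtherCon [] ("evaluated, no excluded constructors") restores nothing.
restorable :: Unfolding -> Maybe String
restorable (CoreUnfolding rhs) = Just rhs
restorable _                   = Nothing

main :: IO ()
main = do
  print (restorable (OtherCon []))              -- Nothing
  print (restorable (CoreUnfolding "\\x -> x")) -- Just "\\x -> x"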



Also... another thing that you should know is that
-fexpose-all-unfoldings doesn't actually expose *all* unfoldings.
Bottoming bindings are never exposed.
That's why in the Clash compiler we have the following code when loading
core-expressions from .hi files:
https://github.com/clash-lang/clash-compiler/blob/cb93b418865e244da50e1d2bc85fbc01bf761f3f/clash-ghc/src-ghc/Clash/GHC/LoadInterfaceFiles.hs#L473-L481

loadExprFromTyThing :: CoreSyn.CoreBndr -> GHC.TyThing -> Maybe CoreSyn.CoreExpr

loadExprFromTyThing bndr tyThing = case tyThing of
  GHC.AnId _id | Var.isId _id ->
    let _idInfo    = Var.idInfo _id
        unfolding  = IdInfo.unfoldingInfo _idInfo
    in case unfolding of
      CoreSyn.CoreUnfolding {} ->
        Just (CoreSyn.unfoldingTemplate unfolding)
      CoreSyn.DFunUnfolding dfbndrs dc es ->
        Just (MkCore.mkCoreLams dfbndrs (MkCore.mkCoreConApps dc es))
      CoreSyn.NoUnfolding
#if MIN_VERSION_ghc(9,0,0)
        | Demand.isDeadEndSig $ IdInfo.strictnessInfo _idInfo
#else
        | Demand.isBottomingSig $ IdInfo.strictnessInfo _idInfo
#endif
        -> do
          let noUnfoldingErr = "no_unfolding " ++ showPpr unsafeGlobalDynFlags bndr
          Just (MkCore.mkAbsentErrorApp (Var.varType _id) noUnfoldingErr)

      _ -> Nothing
  _ -> Nothing

i.e. when we encounter a NoUnfolding with a bottoming demand
signature, we conjure an absentError out of thin air.


On Fri, 1 Apr 2022 at 10:05, ÉRDI Gergő  wrote:
  Hi,

  I'm CC-ing the Clash mailing list because I believe they should have
  encountered the same problem (and perhaps have found a solution to it
  already!).

  I'm trying to use `.hi` files compiled with `ExposeAllUnfoldings` set to
  reconstruct full Core bindings for further processing. By and large, this
  works, but I get tripped up on identifiers whose unfolding is only given
  as `OtherCon []`. It is unclear to me what is causing this -- some of them
  are recursive bindings while others are not.

  The problem, of course, is that if all I know about an identifier is that
  it is `OtherCon []`, that doesn't allow me to restore its definition. So
  is there a way to tell GHC to put "full" unfoldings everywhere in
  `ExposeAllUnfoldings` mode?

  Thanks,
          Gergo


Re: Coping with multiple meanings of `<>`

2021-12-15 Thread Sylvain Henry

Hi Norman,

Usually in the compiler Semigroup's <> is imported qualified. But I agree 
it's ugly.


The trouble with Outputable's <> is that:
1) it doesn't have the same associativity as Semigroup's <>
2) <+> interacts weirdly with <> (cf 
https://mail.haskell.org/pipermail/libraries/2011-November/017066.html)
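A self-contained toy (not GHC's real `SDoc`, but its `<+>` collapses empty documents the way Outputable's does) showing why the grouping -- and therefore the fixity direction -- of `<>` matters once it is mixed with `<+>`:

```haskell
import Prelude hiding ((<>))

newtype Doc = Doc String deriving (Eq, Show)

emptyDoc :: Doc
emptyDoc = Doc ""

-- Plain juxtaposition, like Outputable's <>.
(<>) :: Doc -> Doc -> Doc
Doc a <> Doc b = Doc (a ++ b)

-- Juxtaposition with a space, collapsing empty documents (as <+> does).
(<+>) :: Doc -> Doc -> Doc
Doc "" <+> d      = d
d      <+> Doc "" = d
Doc a  <+> Doc b  = Doc (a ++ " " ++ b)

infixl 6 <>
infixl 6 <+>

main :: IO ()
main = do
  -- With Outputable's infixl <>, x <> emptyDoc <+> z groups to the left;
  -- with Semigroup's infixr <>, the same expression would group to the right,
  -- and the two groupings print differently:
  print ((Doc "x" <> emptyDoc) <+> Doc "z")  -- Doc "x z"
  print (Doc "x" <> (emptyDoc <+> Doc "z"))  -- Doc "xz"
```

So silently switching `<>` to Semigroup's fixity would change rendered output wherever it is mixed with `<+>`, which is why the two operators clash.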


I have rediscovered this when trying to fix it 2 months ago: 
https://gitlab.haskell.org/hsyl20/ghc/-/commits/hsyl20/outputable-append


I have tried to add a new constructor to fix (2) 
https://gitlab.haskell.org/hsyl20/ghc/-/commit/5d09acf4825a816ddb2ca2ec7294639b969ff64b 
but it's still failing 
(https://gitlab.haskell.org/hsyl20/ghc/-/jobs/791114).


Any help fixing these issues would be appreciated :)

Cheers,
Sylvain


On 14/12/2021 20:23, Norman Ramsey wrote:

I find myself wanting to define instances of Semigroup (and Monoid)
in a file that also imports GHC.Utils.Outputable and its `<>` operation
on SDocs.  At the moment I am dealing with the incompatibility by
hiding the Outputable version and instead of writing `s1 <> s2` I write
`hcat [s1, s2]`.  This workaround seems ugly and vaguely embarrassing.

How are others dealing with this issue?  Would it be sensible simply
to make SDoc an instance of Semigroup (and Monoid), or would we be
concerned about potential additional overhead at compile time?


Norman


Re: Compiling "primitive" with ghc head

2021-11-26 Thread Sylvain Henry
Hi,

We now always use Word64# to implement Word64.


You can find the patch for primitive and many other packages in head.hackage:

https://gitlab.haskell.org/ghc/head.hackage/-/blob/master/patches/primitive-0.7.3.0.patch


Cheers,
Sylvain

On 26 November 2021 at 20:35, Harendra Kumar wrote:
>Forgot to add subject in the previous email.
>
>On Sat, 27 Nov 2021 at 01:01, Harendra Kumar 
>wrote:
>
>> Hi GHC devs,
>>
>> While compiling the primitive package using ghc head I ran into the
>> following error:
>>
>> Data/Primitive/Types.hs:265:870: error:
>> • Couldn't match type ‘Word64#’ with ‘Word#’
>>   Expected: Word64_#
>> Actual: Word64#
>> • In the fourth argument of ‘setWord64Array#’, namely ‘x#’
>>   In the first argument of ‘internal’, namely
>> ‘(setWord64Array# arr# i n x#)’
>>   In the first argument of ‘unsafeCoerce#’, namely
>> ‘(internal (setWord64Array# arr# i n x#))’
>> |
>> 265 | derivePrim(Word64, W64#, sIZEOF_WORD64, aLIGNMENT_WORD64,
>> |
>>
>>
>>
>> Any idea what this is and how it can be fixed?
>>
>> -harendra
>>
>
>
>
>


Re: -O* does more than what's in optLevelFlags?

2021-10-11 Thread Sylvain Henry

Hi,

Indeed the optimisation level is directly queried in a few places (e.g. 
grep "optLevel" and "opt_level"). Especially in Core opt pipeline 
getCoreToDo returns:


    core_todo =
      if opt_level == 0 then
        [ static_ptrs_float_outwards
        , CoreDoSimplify max_iter
            (base_mode { sm_phase = FinalPhase
                       , sm_names = ["Non-opt simplification"] })
        , add_caller_ccs
        ]
      else {- opt_level >= 1 -} [...]

Somewhat relevant issue: https://gitlab.haskell.org/ghc/ghc/-/issues/17844


On 11/10/2021 06:08, Erdi, Gergo via ghc-devs wrote:




What is set by -O* that is not included in optLevelFlags?

I would have thought that setting all the flags implied by, e.g., -O1, 
would be the same as setting -O1 itself. But this is not the case! 
Here are all the flags for O1 from optLevelFlags:


Opt_DoLambdaEtaExpansion

Opt_DoEtaReduction

Opt_LlvmTBAA

Opt_CallArity

Opt_Exitification

Opt_CaseMerge

Opt_CaseFolding

Opt_CmmElimCommonBlocks

Opt_CmmSink

Opt_CmmStaticPred

Opt_CSE

Opt_StgCSE

Opt_EnableRewriteRules

Opt_FloatIn

Opt_FullLaziness

Opt_IgnoreAsserts

Opt_Loopification

Opt_CfgBlocklayout

Opt_Specialise

Opt_CrossModuleSpecialise

Opt_InlineGenerics

Opt_Strictness

Opt_UnboxSmallStrictFields

Opt_CprAnal

Opt_WorkerWrapper

Opt_SolveConstantDicts

Opt_NumConstantFolding

And here are the ones that are set by O0 (the default) but not by O1:

Opt_IgnoreInterfacePragmas

Opt_OmitInterfacePragmas

So I expected that the following two invocations of GHC would be 
equivalent:


 1. ghc -O1
 2. ghc -fdo-lambda-eta-expansion -fdo-eta-reduction -fllvm-tbaa
-fcall-arity -fexitification -fcase-merge -fcase-folding
-fcmm-elim-common-blocks -fcmm-sink -fcmm-static-pred -fcse
-fstg-cse -fenable-rewrite-rules -ffloat-in -ffull-laziness
-fignore-asserts -floopification -fblock-layout-cfg -fspecialise
-fcross-module-specialise -finline-generics -fstrictness
-funbox-small-strict-fields -fcpr-anal -fworker-wrapper
-fsolve-constant-dicts -fnum-constant-folding
-fno-ignore-interface-pragmas -fno-omit-interface-pragmas

However, just by observing the output of -dshow-passes, I can see that 
while -O1 applies all these optimizations, the second version does 
NOT, even though I have turned on each and every one of them one by one.


Looking at compiler/GHC/Driver/Session.hs, it is not at all clear that 
-O* should do more than just setting the flags from optLevelFlags. 
What other flags are implied by -O*?







Re: please help with ghc package-db flags

2021-09-28 Thread Sylvain Henry

Hi,

Could you try with `-package-env -` to disable the package environment 
misfeature? Especially with Asterius, as it may mix packages for the host 
with target packages that use WebAssembly... (I was bitten by this 
2 years ago IIRC).


Sylvain


On 27/09/2021 22:50, Norman Ramsey wrote:

I've traced some troubles to a problem with GHC's response
to the -clear-package-db and -package-db flags.  I would very much
like to know if others can duplicate this issue.

All that is needed is for you to try the following commands, or
whatever variations may be appropriate for whatever ghc versions you
have installed on your system:

   ghc-pkg init /tmp/empty-package-db
   ghc -clear-package-db -package-db /tmp/empty-package-db/ -v
   ghc-9.0.1  -clear-package-db -package-db /tmp/empty-package-db/ -v
   ghc-8.10.7 -clear-package-db -package-db /tmp/empty-package-db/ -v

On my system at this present moment, ghc-8.10.7 respects the commands,
but ghc-9.0.1 does not.  The output from the `-v` options show what's
happening: if GHC finds any packages that are *not* "wired-in," then
something is broken.  As examples, I attach the sample outputs from my
own system.

I think the issue is being caused by something mysterious in my filesystem.
Last Friday, ghc 9.0.1 was respecting those flags.  But today it is not.
Before I try to figure out what is going on, I would *very* much
appreciate learning if anyone else can duplicate the issue.

Please try running GHC with an empty package database and let me know
what happens.


Norman



Re: primitive (byte) string literal with length?

2021-08-24 Thread Sylvain Henry
Hi,

You can use cstringLength#, which has a constant-folding rule for literals. 
That's what we use in GHC to build FastString literals.
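A small sketch of that suggestion (assuming GHC >= 9.0, where `cstringLength#` is exported from GHC.Exts; with -O the length is folded to a constant at compile time, and without -O it is still computed correctly at runtime):

```haskell
{-# LANGUAGE MagicHash #-}

import GHC.Exts (Int (I#), cstringLength#)

-- Length of a NUL-terminated primitive string literal.
litLen :: Int
litLen = I# (cstringLength# "some bytes"#)

main :: IO ()
main = print litLen  -- 10
```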


On 24 August 2021 at 06:34, Viktor Dukhovni wrote:
>
>Is there any GHC syntax for constructing a primitive string literal
>with a known (not hand coded) byte count?
>With `"some bytes"#` I get just the `Addr#` pointer, but not the size.
>
>If there's nothing available, would it be reasonable to introduce a new
>syntax?
>Perhaps:
>
>   "some bytes"## :: (# Addr#, Int# #)
>
>--
>   Viktor.
>


Re: Fwd: GHC Module change database

2021-08-09 Thread Sylvain Henry

> Having a 'single source of truth' database will help with this.

Don't hesitate to make a PR to 
https://gitlab.haskell.org/haskell/ghc-api-compat so that at least the 
DB and the .cabal files live in the same repo.


I have created this package but I don't use it so anyone should really 
feel free to take it over.


Sylvain


On 08/08/2021 17:46, Ari Fordsham wrote:



AF


-- Forwarded message -
From: Ari Fordsham
Date: Sun, 8 Aug 2021 at 16:43
Subject: Re: GHC Module change database
To: Vaibhav Sagar


That's where I generated it from :-)

It would be nice to generate that from the new database.

My main motivation was as follows: There are two possible paradigms 
for GHC API compatibility:


- Export old names for new modules - as in ghc-api-compat 

  This allows existing code to (kind of) 'just work' - but it doesn't 
help in managing that code as it gets extended to use new features


- Export new names to old modules
  I am thinking of working on a tool that does this.
  You need to rewrite code to use current modules (maybe
  https://github.com/facebookincubator/retrie/issues/33 will help),
  but then you can just target the latest API, and have compatibility
  built in.

  This seems to me better for active codebases.

  Having a 'single source of truth' database will help with this.

AF


On Sun, 8 Aug 2021 at 16:36, Vaibhav Sagar wrote:


Hi Ari,

Have you seen https://gitlab.haskell.org/haskell/ghc-api-compat ?

Thanks,
Vaibhav

On Mon, Aug 9, 2021 at 1:34 AM Ari Fordsham wrote:

I've made a database of GHC API module changes.

https://github.com/AriFordsham/ghc-module-renames


This might be useful for automatic tooling.

It would be nice if this would be moved into the GHC Gitlab,
and kept up-to-date.

Ari Fordsham





Re: Where else do I need to register fixity declarations?

2021-07-27 Thread Sylvain Henry
> What am I doing wrong? Is filling the `mi_fixities` field of the 
`ModIface` not enough to let importers see the correct fixities?


It seems like the renamer is looking for the fixities via `mi_fix_fn 
(mi_final_exts iface)`, not `mi_fixities`.


You should try to replace:

  , mi_final_exts = mi_final_exts empty

with:

  , mi_final_exts = (mi_final_exts empty){ mi_fix_fn = mkIfaceFixCache (mi_fixities partial) }





Re: Loading a typechecked module and then using it immediately as a package

2021-06-29 Thread Sylvain Henry

Hi,

This part of the API is still awful and a bit in flux as we try to make 
it less so. Modifying the UnitState directly isn't currently supported and 
seems difficult to do correctly (e.g. in your code snippet below you don't 
modify the moduleNameProvidersMap field), so it would probably be better 
to recreate the UnitState from scratch with mkUnitState/initUnitConfig.


You may also have a look at 
GHC.Driver.Backpack.{withBkpSession,buildUnit} in TcSession mode, which 
registers virtual units for Backpack's .bkp files similarly to what you 
want to do. If you really don't want to use the filesystem at all, 
however, I think you will have to deal with moving MyLib from the HPT to 
the EPS and I don't know if it is easily feasible (Backpack resets these 
tables via withTempSession so that interface files are read from disk as 
usual instead iiuc).


Good luck :)
Sylvain


On 25/06/2021 11:17, Erdi, Gergo via ghc-devs wrote:


PUBLIC


Hi,

I have the following two .hs files:

 1. MyLib.hs:

module MyLib where
…

 2. Test.hs:

{-# LANGUAGE PackageImports  #-}
module Test where
import “my-pkg” MyLib
…

I would like to parse/typecheck/load MyLib.hs into some Unit 
“my-unit”, then add that to the package “my-pkg”, and then typecheck 
Test.hs, all in-proc using the GHC API, without putting any other 
files on disk. How do I do that?


What I tried is loading MyLib.hs after setting the homeUnitId in the 
DynFlags to “my-unit”, then editing the packageNameMap in the 
unitState of the DynFlags to map “my-pkg” to “my-unit”:


setHomeUnit :: (GhcMonad m) => UnitId -> m ()
setHomeUnit unitId = do
    dflags <- getSessionDynFlags
    modifySession $ \h -> h{ hsc_dflags = dflags{ homeUnitId = unitId } }

registerUnit :: (GhcMonad m) => PackageName -> UnitId -> m ()
registerUnit pkg unitId = modifySession $ \h -> h{ hsc_dflags = addUnit $ hsc_dflags h }
  where
    addUnit dflags = dflags
        { unitState = let us = unitState dflags in us
            { packageNameMap = M.insert pkg (Indefinite unitId Nothing) $ packageNameMap us
            }
        }

pipeline = do
    setHomeUnit myUnit
    loadModule =<< typecheckModule =<< parseModule =<< modSummaryFor "MyLib"
    registerUnit myPkg myUnit
    setHomeUnit mainUnitId
    typecheckModule =<< parseModule =<< modSummaryFor "Test"

Alas, this doesn’t work: the import of `MyLib` from `my-pkg` fails with:

input/linking/Test.hs:5:1: error:

    Could not find module ‘MyLib’

    It is not a module in the current program, or in any known package.

TBH I’m not very surprised that it didn’t work – that registerUnit 
function is doing some pretty deep surgery on the unitState that 
probably breaks several invariants. Still, I wasn’t able to find a 
better way – all the functions in GHC.Unit.State seem to be for 
querying only.


Thanks,

Gergo





GHC releases and versions

2021-05-28 Thread Sylvain Henry

Hi devs,

We currently have 4 branches of GHC in flight: 8.10, 9.0, 9.2 and HEAD

Latest releases:
- 8.10.4: 2021/02/06
- 9.0.1: 2021/02/04
- 9.2.1-alpha2: 2021/04/23

Considering:

1) 8.10 branch should be stable but a lot of stuff has been merged for 
8.10.5. To the point that 8.10.5 should probably be a "major release in 
the 8.10 series".


2) 9.0.1 is the latest major release but it shouldn't be used before 
9.0.2 is released because of bugs and regressions (9.0.2 branch contains 
a fix for a critical bug in 9.0.1 [1] since March).


3) We might release 9.2.1 and 9.0.2 approximately at the same time which 
will be quite confusing for users ("9.0.2 in the 9.0 series and 9.2.1 in 
the 9.2 series").


4) The first major number is meaningless.

Proposition:

Switch to A.B.C[.D] version scheme where:
- A: major release ("series")
- B: major release in the A series if B>0 and C=0; beta release if B=0
- C: bugfix release for A.B (if C>0) or beta release number (if B=0)
- D: date when building in tree, not for releases

It might be clearer exposed like this:

showVersion = \case
  [a,b,c,d] -> "Dev version of " ++ showVersion [a,b,c] ++ " built on " 
++ show d

  [a,0,c]   -> "beta " ++ show c ++ " in series " ++ show a
  [a,b,0]   -> "Major release " ++ show [a,b] ++ " in series " ++ show a
  [a,b,c]   -> "Bugfix release " ++ show c ++ " for " ++ show [a,b]
  _ -> undefined

> showVersion [9,0,1,20211028]
"Dev version of beta 1 in series 9 built on 20211028"
> showVersion [9,0,1]
"beta 1 in series 9"
> showVersion [9,0,2]
"beta 2 in series 9"
> showVersion [9,1,0]
"Major release [9,1] in series 9"
> showVersion [9,1,1]
"Bugfix release 1 for [9,1]"
> showVersion [9,2,0]
"Major release [9,2] in series 9"
> showVersion [10,1,0]
"Major release [10,1] in series 10"

Effects:

1) We could use C for bugfix versions which are to be released much 
faster than major versions.
2) B would be used for the old series we maintain. We backport a lot 
more stuff than before in older releases it seems, so it would be more 
PVP compliant to bump a major version number.

3) A would be used for the usual 6-month major releases.
4) We could make major releases in the 8 series (e.g. 8.10.5 could be 
released as 8.11.0)
5) We could advertise 9.0.1 as a beta (as everyone seems to consider .1 
releases)
6) 9.2.1 final could be released either as 9.3 (next major in the 9 
series if we just forget about 9.0.* and 9.2.*) or as 10.1.0 (first 
major in the 10 series).
7) No difference anymore between even/odd version numbers (for reference 
the current scheme is explained in [2])


Any thoughts?
Sylvain


[1] https://mail.haskell.org/pipermail/haskell-cafe/2021-March/133540.html
[2] https://gitlab.haskell.org/ghc/ghc/-/wikis/working-conventions/releases



Re: Errors in haddock git fetch

2021-05-27 Thread Sylvain Henry
error: cannot lock ref  'refs/remotes/origin/wip/hsyl20/dynflags': 
'refs/remotes/origin/wip/hsyl20/dynflags/exception' exists; cannot 
create 'refs/remotes/origin/wip/hsyl20/dynflags'


This one is because I've removed some of my wip branches already merged 
upstream. Their names were conflicting with a new branch name 
(wip/hsyl20/dynflags).


I forgot that everyone fetches every wip/ branch, so now it conflicts for 
everyone... I need to remember to remove the branch once !5845 is 
merged.


> Should I worry?

No. The submodule seems to be checked out correctly even with the error.

Sylvain

PS: I still think we shouldn't have that many wip branches in the main 
repositories (cf 
https://mail.haskell.org/pipermail/ghc-devs/2019-February/017031.html)



On 27/05/2021 15:40, Simon Peyton Jones via ghc-devs wrote:


I’m getting these errors from `git submodule update`.  Should I worry?

Simon

From https://gitlab.haskell.org/ghc/haddock 



* [new branch] az/T19834 -> origin/az/T19834

* [new branch] az/T19834-2 -> origin/az/T19834-2

* [new branch] az/T19845 -> origin/az/T19845

* [new branch] az/T19845-2 -> origin/az/T19845-2

* [new branch] az/T19845-3 -> origin/az/T19845-3

* [new branch] 
dependabot/npm_and_yarn/haddock-api/resources/html/hosted-git-info-2.8.9 
-> 
origin/dependabot/npm_and_yarn/haddock-api/resources/html/hosted-git-info-2.8.9


* [new branch] 
dependabot/npm_and_yarn/haddock-api/resources/html/lodash-4.17.21 -> 
origin/dependabot/npm_and_yarn/haddock-api/resources/html/lodash-4.17.21


* [new branch] dn/dn-driver-refactor-and-split -> 
origin/dn/dn-driver-refactor-and-split


   b4e7407b..c7281407 ghc-9.2    -> origin/ghc-9.2

   dabdee14..4f9088e4 ghc-head -> origin/ghc-head

* [new branch] wip/T18389-task-zero -> origin/wip/T18389-task-zero

+ 7d27ea7a...3b6a8774 wip/T19720 -> origin/wip/T19720  (forced update)

+ fe35fed3...40ba457f wip/adinapoli-align-ps-messages  -> 
origin/wip/adinapoli-align-ps-messages  (forced update)


* [new branch] wip/dn-driver-refactor-and-split -> 
origin/wip/dn-driver-refactor-and-split


error: cannot lock ref 'refs/remotes/origin/wip/hsyl20/dynflags': 
'refs/remotes/origin/wip/hsyl20/dynflags/exception' exists; cannot 
create 'refs/remotes/origin/wip/hsyl20/dynflags'


! [new branch] wip/hsyl20/dynflags -> origin/wip/hsyl20/dynflags  
(unable to update local ref)


* [new branch] wip/hsyl20/uncpp -> origin/wip/hsyl20/uncpp

Unable to fetch in submodule path 'utils/haddock'; trying to directly 
fetch 4f9088e4b04e52ca510b55a78048c9230537e449:


Submodule path 'utils/haddock': checked out 
'4f9088e4b04e52ca510b55a78048c9230537e449'


simonpj@MSRC-3645512:~/code/HEAD-7$




Generalising KnownNat/Char/Symbol?

2021-03-16 Thread Sylvain Henry

Hi,

I would like to have a KnownWord constraint to implement a type-safe 
efficient sum type. For now [1] I have:


data V (vs :: [Type]) = Variant !Word Any

where Word is a tag used as an index in the vs list and Any a value 
(unsafeCoerced to the appropriate type).


Instead I would like to have something like:

data V (vs :: [Type]) = Variant (forall w. KnownWord w => Proxy w -> 
Index w vs)


Currently if I use KnownNat (instead of the proposed KnownWord), the 
code isn't very good because Natural equality is implemented using 
`naturalEq` which isn't inlined and we end up with sequences of 
comparisons instead of single case-expressions with unboxed literal 
alternatives.


I could probably implement KnownWord and the required stuff (axioms and 
whatnot), but then someone will want KnownInt and so on. So would it 
instead make sense to generalise the different "Known*" we currently 
have with:


class KnownValue t (v :: t) where valueSing :: SValue t v

newtype SValue t (v :: t) = SValue t

litVal :: KnownValue t v => proxy v -> t

type KnownNat = KnownValue Natural
type KnownChar = KnownValue Char
type KnownSymbol = KnownValue String
type KnownWord = KnownValue Word
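For reference, the existing KnownNat machinery that this would generalise 
can be exercised as follows (a minimal runnable sketch; the Variant/Index 
types above are not used here):

```haskell
{-# LANGUAGE DataKinds, ScopedTypeVariables #-}
module Main where

import GHC.TypeLits (KnownNat, natVal)
import Data.Proxy (Proxy (..))

-- Recover the runtime tag from a type-level natural, as a Variant
-- implementation would when indexing into its list of types.
tagOf :: forall n. KnownNat n => Proxy n -> Word
tagOf = fromIntegral . natVal

main :: IO ()
main = print (tagOf (Proxy :: Proxy 3))
```

Note that `natVal` goes through `Integer`/`Natural`, which is exactly 
where the `naturalEq` inlining cost described above creeps in.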

Thoughts?
Sylvain

[1] 
https://hackage.haskell.org/package/haskus-utils-variant-3.1/docs/Haskus-Utils-Variant.html




Re: Build failure -- missing dependency? Help!

2021-03-15 Thread Sylvain Henry


Thank you! Don’t forget to comment it – especially because it is fake.


Done in https://gitlab.haskell.org/ghc/ghc/-/merge_requests/5265


Make build system doesn't respect package dependencies, only module 
dependencies (afaik)


Does Hadrian suffer from this malady too? Are the fake imports needed? 
Or can we sweep them away when we sweep away make?



No, Hadrian has other issues but not this one :)

Sylvain



Re: Build failure -- missing dependency? Help!

2021-03-15 Thread Sylvain Henry

Hi Simon,

The issue is that:
1. Make build system doesn't respect package dependencies, only module 
dependencies (afaik)
2. The build system isn't aware that most modules implicitly depend on 
GHC.Num.Integer/Natural (to desugar Integer/Natural literals)


That's why we have several fake imports in `base` that look like:

> import GHC.Num.Integer () -- See Note [Depend on GHC.Num.Integer] in 
GHC.Base


Note [Depend on GHC.Num.Integer]


The Integer type is special because GHC.Iface.Tidy uses constructors in
GHC.Num.Integer to construct Integer literal values. Currently it reads the
interface file whether or not the current module *has* any Integer literals,
so it's important that GHC.Num.Integer is compiled before any other module.

(There's a hack in GHC to disable this for packages ghc-prim and ghc-bignum
which aren't allowed to contain any Integer literals.)

Likewise we implicitly need Integer when deriving things like Eq instances.

The danger is that if the build system doesn't know about the dependency
on Integer, it'll compile some base module before GHC.Num.Integer,
resulting in:
  Failed to load interface for ‘GHC.Num.Integer’
    There are files missing in the ‘ghc-bignum’ package,

Bottom line: we make GHC.Base depend on GHC.Num.Integer; and everything
else either depends on GHC.Base, or does not have NoImplicitPrelude
(and hence depends on Prelude).

Note: this is only a problem with the make-based build system. Hadrian
doesn't seem to interleave compilation of modules from separate packages
and respects the dependency between `base` and `ghc-bignum`.

So we should add a similar fake import into 
libraries/base/GHC/Exception/Type.hs-boot. I will open a MR.
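The fix would mirror the existing fake imports quoted above; a sketch of 
the added line (these file contents are an assumption, not the actual MR):

```haskell
-- in libraries/base/GHC/Exception/Type.hs-boot (sketch)
import GHC.Num.Integer ()  -- fake dependency; See Note [Depend on GHC.Num.Integer] in GHC.Base
```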


Sylvain



On 14/03/2021 21:53, Simon Peyton Jones via ghc-devs wrote:


I’m getting this (with ‘sh validate --legacy’).  Oddly

  * It does not happen on HEAD
  * It does happen on wip/T19495, a tiny patch with one innocuous
change to GHC.Tc.Gen.HsType

I can’t see how my patch could possible cause “missing files” in 
ghc-bignum!


I’m guessing that there is a missing dependency that somehow doesn’t 
show up in master, but does in my branch, randomly.


There’s something funny about ghc-bignum; it doesn’t seem to be a 
regular library


Can anyone help?

Thanks

Simon

"inplace/bin/ghc-stage1" -hisuf hi -osuf  o -hcsuf hc -static  -O 
-H64m -Wall -fllvm-fill-undef-with-garbage    -Werror    -this-unit-id 
base-4.16.0.0 -hide-all-packages -package-env - -i -ilibraries/base/. 
-ilibraries/base/dist-install/build 
-Ilibraries/base/dist-install/build 
-ilibraries/base/dist-install/build/./autogen 
-Ilibraries/base/dist-install/build/./autogen -Ilibraries/base/include 
-Ilibraries/base/dist-install/build/include    -optP-include 
-optPlibraries/base/dist-install/build/./autogen/cabal_macros.h 
-package-id ghc-bignum-1.0 -package-id ghc-prim-0.8.0 -package-id rts 
-this-unit-id base -Wcompat -Wnoncanonical-monad-instances 
-XHaskell2010 -O -dcore-lint -ticky -Wwarn  -no-user-package-db 
-rtsopts -Wno-trustworthy-safe -Wno-deprecated-flags 
-Wnoncanonical-monad-instances  -outputdir 
libraries/base/dist-install/build  -dynamic-too -c 
libraries/base/./GHC/Exception/Type.hs-boot -o 
libraries/base/dist-install/build/GHC/Exception/Type.o-boot -dyno 
libraries/base/dist-install/build/GHC/Exception/Type.dyn_o-boot


Failed to load interface for ‘GHC.Num.Integer’

There are files missing in the ‘ghc-bignum’ package,

try running 'ghc-pkg check'.

Use -v (or `:set -v` in ghci) to see a list of the files searched for.

make[1]: *** [libraries/base/ghc.mk:4: 
libraries/base/dist-install/build/GHC/Exception/Type.o-boot] Error 1


make[1]: *** Waiting for unfinished jobs




Re: Pointer-or-Int 63-bit representations for Integer

2021-03-08 Thread Sylvain Henry

Hi Chris,

It has been considered in the past. There are some traces in the wiki: 
https://gitlab.haskell.org/ghc/ghc/-/wikis/replacing-gmp-notes


>> The suggestion discussed by John Meacham, Lennart Augustsson, 
Simon Marlow and Bulat Ziganshin was to change the representation of 
Integer so the Int# does the work of S# and J#: the Int# could be either 
a pointer to the Bignum library array of limbs or, if the number of 
significant digits could fit into say, 31 bits, to use the extra bit as 
an indicator of that fact and hold the entire value in the Int#, thereby 
saving the memory from S# and J#.


It's not trivial because it requires a new runtime representation that 
is dynamically boxed or not.


> An unboxed sum might be an improvement? e.g. (# Int# | ByteArray# #) 
-- would this "kind of" approximate the approach described? I don't have 
a good intuition of what the memory layout would be like.


After the unariser pass, the unboxed sum becomes an unboxed tuple: (# 
Int# {-tag-}, Int#, ByteArray# #)

The two fields don't overlap because they don't have the same slot type.

In my early experiments before implementing ghc-bignum, performance got 
worse in some cases with this encoding iirc. It may be worth checking 
again if someone has time to do it :). Nowadays it should be easier as 
we can define pattern synonyms with INLINE pragmas to replace Integer's 
constructors.
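The pattern-synonym idea can be sketched like this (hypothetical 
representation and names, not GHC's actual Integer):

```haskell
{-# LANGUAGE PatternSynonyms #-}
module Main where

-- Hypothetical two-constructor representation standing in for Integer:
-- a small immediate value or a "big" limb-based value.
data Nat = NatS !Word | NatB [Word]

-- Clients keep matching on a "constructor" even if the underlying
-- representation changes; the INLINE pragma keeps the match cheap.
pattern Small :: Word -> Nat
pattern Small w = NatS w
{-# INLINE Small #-}

toWord :: Nat -> Maybe Word
toWord (Small w) = Just w
toWord _         = Nothing

main :: IO ()
main = print (toWord (Small 5))
```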


Another issue we have with Integer/Natural is that we have to mark most 
operations NOINLINE to support constant-folding. To be fair benchmarks 
should take this into account.


Cheers,
Sylvain


On 08/03/2021 18:13, Chris Done wrote:

Hi all,

In OCaml's implementation, they use a well known 63-bit representation 
of ints to distinguish whether a given machine word is either a 
pointer or to be interpreted as an integer.


I was wondering whether anyone had considered the performance benefits 
of doing this for the venerable Integer type in Haskell? I.e. if the 
Int fits in 63-bits, just shift it and do regular arithmetic. If the 
result ever exceeds 63-bits, allocate a GMP/integer-simple integer and 
return a pointer to it. This way, for most applications--in my 
experience--integers don't really ever exceed 64-bit, so you would 
(possibly) pay a smaller cost than the pointer chasing involved in 
bignum arithmetic. Assumption: it's cheaper to do more CPU 
instructions than to allocate or wait for mainline memory.


This would need assistance from the GC to be able to recognize said 
bit flag.


As I understand the current implementation of integer-gmp, they also 
try to use an Int64 where possible using a constructor 
(https://hackage.haskell.org/package/integer-gmp-1.0.3.0/docs/src/GHC.Integer.Type.html#Integer), 
but I believe that the compiled code will still pointer chase through 
the constructor. Simple addition or subtraction, for example, is 24 
times slower in Integer than in Int for 100 iterations:

https://github.com/haskell-perf/numbers#addition


An unboxed sum might be an improvement? e.g. (# Int# | ByteArray# #) 
-- would this "kind of" approximate the approach described? I don't 
have a good intuition of what the memory layout would be like.


Just pondering.

Cheers,

Chris



Re: Use of forall as a sigil

2020-12-03 Thread Sylvain Henry
I don't know if this has been discussed but couldn't we reuse the lambda 
abstraction syntax for this?


That is instead of writing: forall a ->
Write: \a ->

Sylvain


On 03/12/2020 17:21, Vladislav Zavialov wrote:
There is no *implicit* universal quantification in that example, but 
there is an explicit quantifier. It is written as follows:


  forall a ->

which is entirely analogous to:

  forall a.

in all ways other than the additional requirement to instantiate the 
type variable visibly at use sites.


- Vlad


On Thu, Dec 3, 2020, 19:12 Bryan Richter wrote:


I must be confused, because it sounds like you are contradicting
yourself. :) In one sentence you say that there is no assumed
universal quantification going on, and in the next you say that
the function does indeed work for all types. Isn't that the
definition of universal quantification?

(We're definitely getting somewhere interesting!)

On Thu 3 Dec 2020 17:56, Richard Eisenberg wrote:




On Dec 3, 2020, at 10:23 AM, Bryan Richter wrote:

Consider `forall a -> a -> a`. There's still an implicit
universal quantification that is assumed, right?


No, there isn't, and I think this is the central point of
confusion. A function of type `forall a -> a -> a` does work
for all types `a`. So I think the keyword is appropriate. The
only difference is that we must state what `a` is explicitly.
I thus respectfully disagree with


But somewhere, an author decided to reuse the same keyword to
herald a type argument. It seems they stopped thinking about
the meaning of the word itself, saw that it was syntactically
in the right spot, and borrowed it to mean something else.


Does this help clarify? And if it does, is there a place you
can direct us to where the point could be made more clearly? I
think you're far from the only one who has tripped here.

Richard






Perf test viewer

2020-10-14 Thread Sylvain Henry

Hello everyone,

Since testsuite performance results are stored in Git notes, they are 
more difficult to visualize. At least with values in .T files we could 
see the changes over time, but now increases/decreases are only indicated 
in commit messages (and not necessarily by how much). The only tool we 
have afaik is the perf_notes.py script [1], but it's not very interactive.


So, long story short, I've started another one which is more interactive 
(in the browser). An instance is running on my server: http://hsyl20.fr:4222


It should be updated every 2 hours with newer metrics/commits from CI. 
Sources are here: https://github.com/hsyl20/had


Any feedback/PR is welcome!

Cheers,
Sylvain

[1] 
https://gitlab.haskell.org/ghc/ghc/-/wikis/building/running-tests/performance-tests#comparing-commits





Re: msys woes

2020-10-08 Thread Sylvain Henry

Could you share the contents of "missing-win32-tarballs" log file?

Thanks,
Sylvain

On 08/10/2020 16:51, Shayne Fletcher wrote:


Hi Phyx,

On Thu, Oct 8, 2020 at 9:10 AM Phyx wrote:


> `./configure --enable-tarballs-autodownload` GHC build step on
Windows has been failing because repo.msys2.org


Afaik GHC doesn't rely on repo.msys2.org for builds, only for mirroring. 
The primary url is haskell.org: https://downloads.haskell.org/ghc/mingw/

So its downtime shouldn't have affected you (and it works for me).


I should have mentioned... as always the situation is more complicated:
  - This is in the context of ghc-lib CI;
  - I don't have direct access to a windows box;

A procedure to reproduce it would be,
```
cd ghc
git fetch --tags && git checkout ghc-8.8.1-release
git submodule update --init --recursive
stack --stack-yaml hadrian/stack.yaml exec -- bash -c "./configure 
--enable-tarballs-autodownload"

```
Looking into the hadrian/stack.yaml on that tag, that's an 8.4.3 
resolver, in case that's relevant.


Which url does it say is inaccessible?


Sadly, it doesn't say:
```
2020-10-08T14:41:46.4566084Z configure: loading site script 
/usr/local/etc/config.site
2020-10-08T14:41:46.5682736Z checking for gfind... no
2020-10-08T14:41:46.5700226Z checking for find... /usr/bin/find
2020-10-08T14:41:46.6978096Z checking for sort... /usr/bin/sort
2020-10-08T14:41:46.9667887Z checking for GHC Git commit id... inferred 
9c787d4d24f2b515934c8503ee2bbd7cfac4da20
2020-10-08T14:41:47.566Z checking for ghc... 
/c/Users/VssAdministrator/AppData/Local/Programs/stack/x86_64-windows/ghc-8.4.3/bin/ghc
2020-10-08T14:41:48.0265433Z checking version of ghc... 8.4.3
2020-10-08T14:41:49.2214006Z GHC path canonicalised to: 
c:/Users/VssAdministrator/AppData/Local/Programs/stack/x86_64-windows/ghc-8.4.3/bin/ghc
2020-10-08T14:41:49.8138282Z checking build system type... x86_64-w64-mingw32
2020-10-08T14:41:49.8152277Z checking host system type... x86_64-w64-mingw32
2020-10-08T14:41:49.8172851Z checking target system type... x86_64-w64-mingw32
2020-10-08T14:41:49.8197624Z Host platform inferred as: x86_64-unknown-mingw32
2020-10-08T14:41:50.1249990Z Target platform inferred as: x86_64-unknown-mingw32
2020-10-08T14:41:51.4787854Z GHC build  : x86_64-unknown-mingw32
2020-10-08T14:41:51.4788960Z GHC host   : x86_64-unknown-mingw32
2020-10-08T14:41:51.4790058Z GHC target : x86_64-unknown-mingw32
2020-10-08T14:41:51.4791914Z LLVM target: x86_64-unknown-windows
2020-10-08T14:41:51.6809080Z checking for path to top of build tree... 
D:/a/1/s/ghc
2020-10-08T14:41:51.7094005Z configure: Checking for Windows toolchain 
tarballs...
2020-10-08T14:41:53.0985704Z #=#=#
2020-10-08T14:41:53.0986841Z
2020-10-08T14:41:53.1745226Z ###
   26.4%
2020-10-08T14:41:53.1762650Z 
 100.0%
2020-10-08T14:41:53.4899178Z #=#=#
2020-10-08T14:41:53.5914142Z ##O#- #
2020-10-08T14:41:53.6973216Z ##O=#  #
2020-10-08T14:41:53.7049675Z #=#=-#  #
2020-10-08T14:41:53.7051106Z curl: (22) The requested URL returned error: 404 
Not Found
2020-10-08T14:41:53.7596464Z
2020-10-08T14:41:53.7600456Z ERROR: Download failed.
2020-10-08T14:41:53.7614446Z
2020-10-08T14:41:53.7615639Z Error fetching msys2 tarballs; see errors above.
```


--
Shayne Fletcher



Re: Weird "missing hi file" problem with a serializable Core patch

2020-09-17 Thread Sylvain Henry

> By the time we do CorePrep, the hi files should already have been written.


I don't think so. When we generate real code we write the interface after the 
backend has generated the output object. See Note [Writing interface files] in 
GHC.Driver.Main

Cheers,
Sylvain

On 17/09/2020 12:17, Cheng Shao wrote:

Hi Ben,

The -ddump-if-trace output is attached here. The error is produced
when compiling GHC.Types in ghc-prim.


Note that interface files are written after the Core pipeline is run.

Sorry for the confusion, I didn't mean the Core simplifier pipeline. I
mean the "Core -> Iface -> Core" roundtrip I tried to perform using
the output of CorePrep. By the time we do CorePrep, the hi files
should already have been written.

On Wed, Sep 16, 2020 at 11:48 PM Ben Gamari  wrote:

Cheng Shao  writes:


Hi all,

Following a short chat in #ghc last week, I did a first attempt of
reusing existing Iface logic to implement serialization for
codegen-related Core. The implementation is included in the attached
patch (~100 loc). As a quick and dirty validation of whether it works,
I also modified the codegen pipeline logic to do a roundtrip: after
CorePrep, the Core bits are converted to Iface, then we immediately
convert it back and use it for later compiling.

With the patch applied, stage-1 GHC would produce a "missing hi file"
error like:

: Bad interface file: _build/stage1/libraries/ghc-prim/build/GHC/Types.hi
   _build/stage1/libraries/ghc-prim/build/GHC/Types.hi:
openBinaryFile: does not exist (No such file or directory)


Hi Cheng,

Which module is being compiled when this error is produced? Could you
provide -ddump-if-trace output for the failing compilation?


The error surprises me, since by the time we perform the Core-to-Core
roundtrip, the .hi file should already have been written to disk. Is
there anything obviously wrong with the implementation? I'd appreciate
any pointers or further questions, thanks a lot!


Note that interface files are written after the Core pipeline is run.

Cheers,

- Ben




Re: Parser depends on DynFlags, depends on Hooks, depends on TcM, DsM, ...

2020-09-10 Thread Sylvain Henry

Hi Sebastian,

Last month I tried to make a DynFlags-free parser. The branch is here: 
https://gitlab.haskell.org/hsyl20/ghc/-/commits/hsyl20/dynflags/parser 
(doesn't build iirc)


1) The input of the parser is almost DynFlags-free thanks to Alec's 
patch [1]. On that front, we just have to move `mkParserFlags` out of 
GHC.Parser. I would put it alongside other functions generating config 
datatypes from DynFlags in GHC.Driver.Config (added yesterday). It's 
done in my branch and it only required a bit of plumbing to fix 
`lexTokenStream` iirc.


2) The output of the parser is the issue, as you point out. The main 
issue is that it uses SDoc/ErrMsg which are dependent on DynFlags.


In the branch I've tried to avoid the use of SDoc by using ADTs to 
return errors and warnings so that the client of the parser would be 
responsible for converting them into SDoc if needed. This is the 
approach that we would like to generalize [2]. The ADT would look like 
[3] and the pretty-printing module like [4]. The idea was that 
ghc-lib-parser wouldn't integrate the pretty-printing module to avoid 
the dependencies.
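The shape of such an interface might look like this (a hypothetical 
sketch; the actual ADT in [3] and printer in [4] differ):

```haskell
module Main where

-- Errors as plain values: tools can pattern-match on them, and only
-- clients that want rendered messages depend on the pretty-printer.
data ParserError
  = PsErrUnexpectedToken String
  | PsErrMissingBlock
  deriving (Eq, Show)

-- Rendering lives in a separate function (a separate module in GHC),
-- so a client like ghc-lib-parser need not ship it.
pprParserError :: ParserError -> String
pprParserError e = case e of
  PsErrUnexpectedToken t -> "unexpected token: " ++ t
  PsErrMissingBlock      -> "missing block"

main :: IO ()
main = putStrLn (pprParserError (PsErrUnexpectedToken "where"))
```

An IDE could then match on `PsErrUnexpectedToken` directly instead of 
parsing a rendered SDoc.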


I think it's the best interface (for IDEs and other tools), so we just 
have to complete the work :). The branch stalled because I tried to 
avoid SDoc even in the pretty-printing module, using Doc instead, but 
it wasn't a good idea... I'll try to resume the work soon.


In the meantime I've been working on making Outputable/SDoc independent 
of DynFlags. If we merge [5] in some form then the last place where we 
use `sdocWithDynFlags` will be in CLabel's Outputable instance (to fix 
this I think we could just depend on the PprStyle (Asm or C) instead of 
querying the backend in the DynFlags). This could be another approach to 
make the parser, almost as it is today, independent of DynFlags. A 
side-effect of this work is that ghc-lib-parser could include the 
pretty-printing module too.


So to answer your question:

> Would you say it's reasonable to abstract the definition of `PState` 
over the `DynFlags` type?


We're close to removing the dependency on DynFlags, so I would prefer 
finishing that instead of trying to abstract over it.


The roadmap:

1. Make Outputable/SDoc independent of DynFlags
1.1 Remove sdocWithDynFlags used to query the platform (!3972)
1.2 Remove sdocWithDynFlags used to query the backend in CLabel's 
Outputable instance

1.3 Remove sdocWithDynFlags
2. Move mkParserFlags from GHC.Parser to GHC.Driver.Config
3. (Make the parser use ADTs to return errors/warnings)
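As an illustration of step 3, here is a minimal sketch of what structured 
parser diagnostics with a separate renderer could look like. All names below 
are hypothetical, invented for illustration; they are not GHC's actual error 
types or modules.

```haskell
{-# LANGUAGE LambdaCase #-}

-- Structured parser diagnostics as a plain ADT: no SDoc, no DynFlags.
-- Constructor names are illustrative, not GHC's.
data ParseError
  = PsUnexpectedToken String      -- the offending token text
  | PsMissingBlockClose String    -- the construct left unterminated
  deriving (Eq, Show)

-- Rendering lives in a separate function (standing in for a separate
-- pretty-printing module) that clients such as ghc-lib-parser users
-- could depend on, or omit entirely.
renderParseError :: ParseError -> String
renderParseError = \case
  PsUnexpectedToken tok -> "parse error: unexpected token '" ++ tok ++ "'"
  PsMissingBlockClose c -> "parse error: unterminated " ++ c
```

An IDE could pattern-match on the ADT directly, while a command-line client 
would call the renderer.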

Cheers,
Sylvain

[1] 
https://gitlab.haskell.org/ghc/ghc/-/commit/469fe6133646df5568c9486de2202124cb734242

[2] https://gitlab.haskell.org/ghc/ghc/-/wikis/Errors-as-(structured)-values
[3] 
https://gitlab.haskell.org/hsyl20/ghc/-/blob/hsyl20/dynflags/parser/compiler/GHC/Parser/Errors.hs
[4] 
https://gitlab.haskell.org/hsyl20/ghc/-/blob/hsyl20/dynflags/parser/compiler/GHC/Parser/Errors/Ppr.hs

[5] https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3972


On 10/09/2020 15:12, Sebastian Graf wrote:

Hey Sylvain,

In https://gitlab.haskell.org/ghc/ghc/-/merge_requests/3971 I had to 
fight once more with the transitive dependency set of the parser, the 
minimality of which is crucial for ghc-lib-parser and is tested by the 
CountParserDeps test.


I discovered that I need to make (parts of) `DsM` abstract, because it 
is transitively imported from the Parser for example through Parser.y 
-> Lexer.x -> DynFlags -> Hooks -> {DsM,TcM}.
Since you are our mastermind behind the "Tame DynFlags" initiative, 
I'd like to hear your opinion on where progress can be/is made on that 
front.


I see there is https://gitlab.haskell.org/ghc/ghc/-/issues/10961 and 
https://gitlab.haskell.org/ghc/ghc/-/issues/11301 which ask a related, 
but different question: They want a DynFlags-free interface, but I 
even want a DynFlags-free *module*.


Would you say it's reasonable to abstract the definition of `PState` 
over the `DynFlags` type? I think it's only used for pretty-printing 
messages, which is one of your specialties (the treatment of DynFlags 
in there, at least).
Anyway, can you think of or perhaps point me to an existing road map 
on that issue?


Thank you!
Sebastian
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Breakage on master

2020-08-19 Thread Sylvain Henry

I can't reproduce the issue. Is it on a specific branch?

If it's in a branch with new ASSERTs, you should just have to import 
GHC.Utils.Panic to fix the issue. Or is there something else?


Sylvain

On 19/08/2020 10:24, Simon Peyton Jones via ghc-devs wrote:

|  Strangely, `./validate --legacy --slow` also appears to work fine for
|  me
|  on 55fd1dc55990623dcf3b2e6143e766242315d757.
|
|  Simon, can you describe how you were previously building GHC?

./validate --legacy --fast

with this validate.mk (below).

But the issue is terribly simple: assertPprPanic is used (by ASSERT), but no 
longer imported (by many many modules) because they previously got it from 
Outputable.  How can this possibly work?  It certainly doesn't for me.

Would it be possible to revert the patch that broke this?  I'm fully stalled 
with no workaround.

Thanks

Simon



SRC_HC_OPTS= -O -H64m
GhcStage1HcOpts= -DDEBUG
GhcStage2HcOpts= -dcore-lint -ticky
GhcLibHcOpts   = -O -dcore-lint -ticky

BUILD_PROF_LIBS= NO
SplitSections  = NO
HADDOCK_DOCS   = NO
BUILD_SPHINX_HTML  = NO
BUILD_SPHINX_PDF   = NO
BUILD_MAN  = NO

LAX_DEPENDENCIES   = YES


|  -Original Message-
|  From: Ben Gamari 
|  Sent: 18 August 2020 20:04
|  To: Simon Peyton Jones ; GHC developers 
|  Subject: RE: Breakage on master
|
|  Ben Gamari  writes:
|
|  > Simon Peyton Jones  writes:
|  >
|  >> |  meantime the issue can be worked around by reverting
|  >> |  accbc242e555822a2060091af7188ce6e9b0144e.
|  >>
|  >> Alas, not so.
|  >>
|  >> git revert accbc242e555822a2060091af7188ce6e9b0144e
|  >> warning: Failed to merge submodule utils/haddock (commits don't
|  follow merge-base)
|  >> error: could not revert accbc242e5... DynFlags: disentangle
|  Outputable
|  >> hint: after resolving the conflicts, mark the corrected paths
|  >> hint: with 'git add ' or 'git rm '
|  >> hint: and commit the result with 'git commit'
|  >>
|  > Sigh, yes, this is what I was afraid of. Strangely, Hadrian's
|  validate
|  > flavour doesn't appear to be affected by the issue that you
|  reported.
|  > Are you using the make build system by any chance?
|  >
|  Strangely, `./validate --legacy --slow` also appears to work fine for
|  me
|  on 55fd1dc55990623dcf3b2e6143e766242315d757.
|
|  Simon, can you describe how you were previously building GHC?
|
|  Cheers,
|
|  - Ben
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs



Re: Using a development snapshot of happy

2020-08-04 Thread Sylvain Henry

Hi,

For solution b, Happy doesn't have to be a submodule. You can add it to 
hadrian/stack.yaml if you build with stack. See 
https://gitlab.haskell.org/ghc/ghc/-/commit/90e0ab7d80d88463df97bc3514fc89d2ab9fcfca 
where I had to do this.  It may be possible to do the same for Cabal 
with hadrian/cabal.project but I've not tried it.
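For reference, a pinned snapshot entry in hadrian/stack.yaml could look 
roughly like the sketch below. The commit field is a placeholder, not a 
tested revision, and the exact shape should be checked against the stack 
documentation.

```yaml
# hadrian/stack.yaml (sketch): use a development snapshot of happy
# instead of a Hackage release. Replace the commit placeholder with
# the snapshot revision you actually want.
extra-deps:
- git: https://github.com/simonmar/happy.git
  commit: <snapshot-commit-sha>
```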


Cheers,
Sylvain


On 02/08/2020 09:43, Vladislav Zavialov wrote:

Hi ghc-devs,

I’m working on the unification of parsers for terms and types, and one of the 
things I’d really like to make use of is a feature I implemented in ‘happy’ in 
October 2019 (9 months ago):

   https://github.com/simonmar/happy/pull/153

It’s been merged upstream, but there has been no release of ‘happy’, despite 
repeated requests:

   1. I asked for a release in December: 
https://github.com/simonmar/happy/issues/164
   2. Ben asked for a release a month ago: 
https://github.com/simonmar/happy/issues/168

I see two solutions here:

   a) Find a co-maintainer for ‘happy’ who could make releases more frequently 
(I understand the current maintainers probably don’t have the time to do it).
   b) Use a development snapshot of ‘happy’ in GHC

Maybe we need to do both, but one reason I’d like to see (b) in particular 
happen is that I can imagine introducing more features to ‘happy’ for use in 
GHC, and it’d be nice not to wait for a release every time. For instance, there 
are some changes I'd like to make to happy/alex in order to implement #17750.

So here are two questions I have:

   1. Are there any objections to this idea?
   2. If not, could someone more familiar with the build process guide me as to 
how this should be implemented? Do I add ‘happy’ as a submodule and change 
something in the ./configure script, or is there more to it? Do I need to 
modify make/hadrian, and if so, then how?

Thanks,
- Vlad
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs



Re: [ghc-lib] internal error after removal of integer-simple

2020-06-20 Thread Sylvain Henry

Hi,

I would think it's more related to the linear types patch given that it 
added ghc-prim:GHC.Types.One (wired-in). Could you open a ticket with a 
way to reproduce the failure?


Thanks,
Sylvain


On 19/06/2020 23:55, Shayne Fletcher via ghc-devs wrote:
With the recent MR that removes integer-simple in favor of ghc-bignum, 
I find that I get a runtime failure when I try to use ghc-lib to 
generate core:

```
# Running: stack     --no-terminal exec -- mini-compile 
examples/mini-compile/test/MiniCompileTest.hs


examples/mini-compile/test/MiniCompileTest.hs:66:5: error:
    * GHC internal error: `One' is not in scope during type checking, 
but it passed the renamer

      tcl_env of environment: [628 :-> ATcTyCon TrName :: *,
                               62b :-> APromotionErr RecDataConPE,
                               62e :-> APromotionErr RecDataConPE]
    * In the definition of data constructor `TrNameS'
      In the data declaration for `TrName'
   |
66 |   = TrNameS Addr#  -- Static
   |     ^
mini-compile: GHC internal error: `One' is not in scope during type 
checking, but it passed the renamer

tcl_env of environment: [628 :-> ATcTyCon TrName :: *,
                         62b :-> APromotionErr RecDataConPE,
                         62e :-> APromotionErr RecDataConPE]
```

Anyone have any pointers on what is going wrong and what I should be 
looking at?


--
*Shayne Fletcher*
Language Engineer */* +1 917 699 7663
*Digital Asset*, creators of *DAML*


This message, and any attachments, is for the intended recipient(s) 
only, may contain information that is privileged, confidential and/or 
proprietary and subject to important terms and conditions available at 
http://www.digitalasset.com/emaildisclaimer.html. If you are not the 
intended recipient, please delete this message.


___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Receiving type information from environment instead of hardcoding.

2020-06-19 Thread Sylvain Henry

FYI ghc-bignum has been merged yesterday.

Cheers,
Sylvain

On 15/06/2020 11:28, Rinat Stryungis wrote:
In light of the mentioned patch, I prefer to pause my work on 
the unification of Nat and Natural until that patch is merged. After 
that, I am going to rebase my branch and open an MR. Thank you, Ben!


On Mon, 15 Jun 2020 at 00:32, Ben Gamari wrote:


Rinat Stryungis (lazybone...@gmail.com) writes:

> Hi. I have a question about a possible way of unification of Nat and
> Natural. I've almost done that, but only in case of using
integer-gmp.
> If I use integer-simple there is a completely different
definition of
> Natural.
>
> How I construct now naturalTyCon (to make `naturalTy` to use it
instead of
> `typeNatKind`) :
>
> ```naturalTyCon :: TyCon
> naturalTyCon = pcTyCon naturalTyConName Nothing []
[natSDataCon,natJDataCon]
>
> natSDataCon :: DataCon
> natSDataCon = pcDataCon natSDataConName [] [wordPrimTy] naturalTyCon
>
> etc...
> ```
> Now I have to check`DynFlags` in a few places to reimplement
`naturalTyCon`
> in case of using `integer-simple`.
>
> Is there a way to avoid hardcoding of `naturalTy`?
> My colleague said that it would be nice to get `naturalTy` from an
> environment by something like `lookupTyCon`,
> but there are many functions that don't use any environment, like the
> functions from the `typeNatTyCons` list in `GHC.Builtin.Types.Literals`.
>
> Now I am going to use `DynFlags` checking, but it looks like an
ugly way...

Note that all of this will be moot in a matter of days. The ghc-bignum
patch, which will ship in 8.12, removes integer-simple and uses a
consistent number representation across its various supported
backends.

In light of this, if I were you I would probably just settle for a
hack
in the meantime.

Cheers,

- Ben



--
Best regards.
Rinat Striungis

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Search in GitLab

2020-06-15 Thread Sylvain Henry

> Does Google index our repo?  Can I use Google to search it somehow?

In google you can type:

 site:gitlab.haskell.org "<>"

Cheers,
Sylvain


On 15/06/2020 11:48, Simon Peyton Jones via ghc-devs wrote:


Does anyone know how to search better in GitLab?

Currently I’m using the standard GitLab search.  I’m searching for

“<>”

where I intend the quotes to mean exactly that string, as is usual in a 
search term.  But I get lots of results mentioning loop, without the 
angle brackets.


Moreover I want to sort the results by date or ticket number, and I 
can’t see how to do that.


Does Google index our repo?  Can I use Google to search it somehow?

Thanks

Simon


___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: keepAlive# primop

2020-05-26 Thread Sylvain Henry

Hi ghc-devs,

After a discussion today about `keepAlive#`, I think Option E [1] is 
even more appealing.


To recap, the idea was to have keep-alive variable sets attached to 
case-expressions in Core. E.g. `case {k} x of ...`



1. One of the issues was the semantics of `keepAlive#`, which interacts 
intricately with evaluation. As Simon mentioned, we want a semantics like:


keepAlive# k x ==> x `seq` touch# k `seq` x

with potentially a special `seq` to deal with diverging `x`.

With `case {k} x of ...` we have this semantics. Even if we discard the 
case alternatives when `x` diverges, we don't discard the keep-alive set, 
so we're good.
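For intuition, the intended "x `seq` touch# k `seq` x" behaviour can be 
approximated in today's source language with `touch#`. This is only an 
illustrative sketch, not GHC's implementation; the point of the proposal is 
to express the same guarantee in Core itself.

```haskell
{-# LANGUAGE MagicHash, UnboxedTuples #-}
import GHC.Exts (touch#)
import GHC.IO   (IO (..))

-- Run an IO action while keeping 'k' reachable until it has finished:
-- touch# is sequenced after the action's state thread, so the GC may
-- not collect 'k' before that point.
keepAliveIO :: k -> IO a -> IO a
keepAliveIO k (IO act) = IO $ \s0 ->
  case act s0 of
    (# s1, r #) -> case touch# k s1 of
      s2 -> (# s2, r #)
```

Typical use would be keeping a ForeignPtr's owner alive while dereferencing 
its raw Ptr.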



2. Simon wanted to push `keepAlive#` into case-expressions. With this 
approach we should only have to fix case-of-case to take keep-alive sets 
into account.


case {k} (case {k2} x of .. -> a; ... -> b) of
C0 .. -> e1
C1 .. -> e2

===>

case {k,k2} x of
... -> case {k} a of { C0 .. -> e1; C1 ... -> e2 }
... -> case {k} b of { C0 .. -> e1; C1 ... -> e2 }


3. Compared to other approaches: we don't have to use stack frames (good 
for performance) and we don't have to deal with a continuation (good for 
Core optimizations, hence perf).


4. Implementing this approach is quite straightforward even if it 
modifies Core. I did it last month in [2]. This patch doesn't fully work 
yet with `-O` because some transformation (related to join points IIRC) 
doesn't take keep-alive sets into account but it should be pretty easy 
to fix if we want to use this approach.



Given how hard it is to come up with a good design/implementation of 
the other approaches, this one strikes me as probably the most principled 
one we have, and yet it is relatively easy to implement. What do you think?


Cheers,
Sylvain


[1] 
https://gitlab.haskell.org/ghc/ghc/-/wikis/proposal/with-combinator#option-e-tag-core-case-expression-with-kept-alive-variables


[2] https://gitlab.haskell.org/hsyl20/ghc/-/commits/hsyl20-keepalive



On 13/04/2020 19:51, Ben Gamari wrote:

Ccing ghc-devs@ since this discussion is something of general interest
to the community.


Sylvain Henry  writes:


Simon, Ben,

I've been reading and thinking about `runRW#` issues, which are closely
related to the issues we have with the `keepAlive#` primop.

To recap, the problem is that we want some transformations (that Simon
has listed in [1]) to consider:

```
case runRW# f of ...

case keepAlive# k a of ...
```

as if they were really:

```
case f realWorld# of ...

case a of ...
```

BUT without breaking the semantics of runRW# and keepAlive#.

I have been thinking about a solution that I have described on the wiki:
https://gitlab.haskell.org/ghc/ghc/-/wikis/proposal/with-combinator#option-e-tag-core-case-expression-with-kept-alive-variables

The idea is to keep a set of variable names in each Core case-expression
that are kept alive during the evaluation of the scrutinee.

I think it would work very nicely with your `newState#` primop described
in [2], both for `runST` and for `unsafeDupablePerformIO` (details on
the wiki).

It requires a little more upfront work to adapt the code involving
case-expressions. But it will force us to review all transformations to
check if they are sound when keep-alive sets are not empty, which we
would have to do anyway if we implemented another option. We could start
by disabling transformations involving non-empty keep-alive sets and
iterate to enable the sound ones.

I would like your opinions on the approach. I may have totally missed
something.

Thanks for writing this down!

Indeed it is an interesting idea. However, as expressed on IRC, I
wonder whether this problem rises to the level where it warrants an
adaptation to our Core representation. It feels a bit like the
tail is wagging the dog here, especially given how the "tail" here
merely exists to support FFI.

That being said, this is one of the few options which remain on the
table that doesn't require changes to user code. Moreover, the
applicability to runRW# is quite intriguing.

Another (admittedly, more ad-hoc) option that would avoid modifying Core
would be to teach the simplifier about the class of
"continuation-passing" primops (e.g. `keepAlive#` and `runRW#`), allowing it
to push case analyses into the continuation argument. That is,

 case keepAlive# x expr of pat -> rhs

 ~>

 keepAlive# x (case expr of pat -> rhs)

Of course, doing this is a bit tricky since one must rewrite the
application of keepAlive# to ensure that the resulting application is
well-typed. Admittedly, this doesn't help the runRW# case (although this
could presumably be accommodated by touch#'ing the final state token in
the runRW# desugaring emitted by CorePrep).

On the whole, I'm not a fan of this ad-hoc option. It increases the
complexity of the simplifier all to support a single operation. By
comparison, the Core extension looks somewhat appealing.

Cheers,

- Ben


[1] 
https://git

Re: 8.12 plans

2020-05-23 Thread Sylvain Henry

Hi Ben,

ghc-bignum (!2231) is ready to be merged for 8.12. We are just waiting 
for bytestring/text maintainers to merge 2 simple patches.


Thanks,
Sylvain


On 05/05/2020 20:12, Ben Gamari wrote:

Hi everyone,

The time is again upon us to start thinking about release planning for
the next major release: GHC 8.12. In keeping with our 6-month release
schedule, I propose the following schedule:

  * Mid-June 2020: Aim to have all major features in the tree
  * Late-June 2020: Cut the ghc-8.12 branch
  * June - August 2020: 3 alpha releases
  * 1 September 2020: beta release
  * 25 September 2020: Final 8.12.1 release

So, if you have any major features which you would like to merge for
8.12, now is the time to start planning how to wrap them up in the next
month or so. As always, do let me know if you think this may be
problematic and we can discuss options.

Cheers,

- Ben


___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Hadrian build with DWARF information doesn't contain as much debug information as I would expect

2020-05-03 Thread Sylvain Henry

I wonder how gdb knows which shared objects to load and what addresses to use


Perhaps it reads /proc/PID/maps (or /proc/PID/map_files/*)?
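The guess is easy to check by hand on Linux; each line of the maps file 
gives an address range and, for file-backed mappings, the backing object 
(e.g. a shared library). This is just a quick illustration, not an account 
of what gdb actually does.

```shell
# Print the first few memory mappings of the current process.
# Lines look like: <start>-<end> <perms> <offset> <dev> <inode> <path>
head -n 5 /proc/self/maps
```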

Cheers,
Sylvain

On 03/05/2020 09:55, Matthew Pickering wrote:

Thanks Adam for the tip about dynamic linking, as always.

He was right that the debug information was in the relevant .so files
and that when I statically linked GHC the information was included (as
with cabal). The issue was that my program which reads the DWARF information
did not work properly for dynamically linked executables (and still
doesn't; I wonder how gdb knows which shared objects to load and what
addresses to use).

Cheers,

Matt

On Sun, May 3, 2020 at 8:20 AM Adam Sandberg Eriksson
 wrote:

I don't know how you read the DWARF info but maybe it's missing info from 
dynamic libraries? If your GHC is dynamically linked the library DWARF info 
might be available in their respective .so's.

Cheers,
Adam Sandberg Eriksson

On Sat, 2 May 2020, at 23:08, Matthew Pickering wrote:

I followed the instructions on the wiki to enable debug symbols in my
build of GHC.
(https://gitlab.haskell.org/ghc/ghc/-/wikis/building/hadrian#enabling-dwarf-debug-symbols)

So I added these flags to may hadrian.settings file

stage1.*.ghc.hs.opts += -g3
stage1.*.cabal.configure.opts += --disable-library-stripping
--disable-executable-stripping
stage1.ghc-bin.ghc.link.opts += -eventlog

The resulting executable has debug information in it for the
executable component but not for any of the libraries in including the
compiler library.

("../sysdeps/x86_64/start.S",Just 4414944,Just 4414987,139633489318136)
("init.c",Nothing,Nothing,139633489318136)
("../sysdeps/x86_64/crti.S",Nothing,Nothing,139633489318136)
("ghc/Main.hs",Just 4415312,Just 4615455,139633489318136)
("ghc/GHCi/Leak.hs",Just 4615480,Just 4623414,139633489318136)
("ghc/GHCi/UI.hs",Just 4623440,Just 5461990,139633489318136)
("ghc/GHCi/UI/Info.hs",Just 5461992,Just 5571230,139633489318136)
("ghc/GHCi/UI/Monad.hs",Just 5571232,Just 5679695,139633489318136)
("ghc/GHCi/UI/Tags.hs",Just 5679696,Just 5704775,139633489318136)
("ghc/GHCi/Util.hs",Just 24,Just 173,139633489318136)
("../sysdeps/x86_64/crtn.S",Nothing,Nothing,139633489318136)

I tried building a project with cabal and the resulting executable had
debug information for every file in the dependencies as well as the
main project.

So how do I convince hadrian to include the correct information? Is it
a bug in hadrian?

I checked the command line when building the library and `-g3` is passed.

Cheers,

Matt
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs




Re: T13456

2020-04-28 Thread Sylvain Henry

Simon

!2600 doesn't contain the fix introduced by !3121. You should rebase it.

Sylvain


On 28/04/2020 09:50, Simon Peyton Jones via ghc-devs wrote:

Ben

I'm still getting framework failures from the testsuite, as below.

But now it's not just me: it's CI!   See !2600 which is failing in this way.

It'd be good to nail this... it seems wrong to have to ignore framework 
failures when checking that a build validates.

Simon

|  -Original Message-
|  From: Simon Peyton Jones
|  Sent: 20 April 2020 21:42
|  To: Ben Gamari 
|  Subject: RE: T13456
|
|  Thanks!
|
|  | -Original Message-
|  | From: Ben Gamari 
|  | Sent: 20 April 2020 18:57
|  | To: Simon Peyton Jones ; ghc-devs 
|  | Subject: Re: T13456
|  |
|  | Simon Peyton Jones via ghc-devs  writes:
|  |
|  | > I'm getting this failure (below) from validate fairly consistently.
|  | > It is often silenced by adding an empty file
|  | > ghci/should_run/T13456.stderr But it's troubling.  Does anyone else
|  see
|  | this?  How can I debug it?
|  | >
|  | Indeed this is odd. I have not seen this in CI or my local builds. It's
|  | possible that I have seen it in local builds that were failing for
|  other
|  | reasons but ignored it.
|  |
|  | While I don't know why you are seeing these failures in general, the
|  fact
|  | that they are reported as framework failures is arguably a bug. I would
|  | argue that we should treat a non-existing .stderr file as we would an
|  | empty file. I've opened !3121 fixing this. Hopefully you will see a
|  more
|  | helpful error message with this patch.
|  |
|  | Cheers,
|  |
|  | - Ben
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs



Spam projects on gitlab

2020-04-23 Thread Sylvain Henry

Hi,

I've just noticed these spam projects on our gitlab:

- https://gitlab.haskell.org/craigonaldson/lumaslim
- https://gitlab.haskell.org/AnthonyMussen/survey-of-evianne-cream
- https://gitlab.haskell.org/RobertoHeard/robertoheard
- https://gitlab.haskell.org/salihagenter/commission-based-business-in-india

Can someone remove them?

Thanks,
Sylvain

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: TcTypeNats

2020-04-14 Thread Sylvain Henry

Hi Simon,

Not an omission, it will be moved into GHC.Builtin.Types.Literals with 
the next renaming MR (!3072).


It is only imported by PrelInfo and GHC.IfaceToCore.

Sylvain


On 14/04/2020 15:48, Simon Peyton Jones wrote:


Sylvain

TcTypeNats still exists in compiler/typecheck/

An omission?

Simon

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Module Renaming: GHC.Core.Op

2020-04-05 Thread Sylvain Henry

> That would work as well. But I still favour the renaming approach.

If no one opposes, I'll do the s/GHC.Core.Op/GHC.Core.Opt/ in the next 
renaming MR (after !2924 is merged).


Cheers,
Sylvain


On 04/04/2020 14:56, Andreas Klebinger wrote:

Thanks for the response Sylvain.

> put all the Core types in GHC.Core.Types and move everything
operation from GHC.Core.Op to GHC.Core?

That would work as well. But I still favour the renaming approach.

Almost all of these passes are optimizations, and the few that are not
are just there to support the optimizations, so their placement still
makes sense. To me, anyway.

If people reject the renaming your suggestion would still be an
improvement over .Op though.

Cheers,
Andreas

Sylvain Henry schrieb am 03.04.2020 um 23:29:

Hi Andreas,

"Op" stands for "Operation" but it's not very obvious (ironically, when
I started this renaming work one of the motivations was to avoid
ambiguous acronyms... failed).

The idea was to separate Core types from Core
transformations/analyses/passes. I couldn't find anything better than
"Operation" to sum up the latter category, but I concede it's not very
good.

But perhaps we should do the opposite as we're doing in GHC.Tc: put
all the Core types in GHC.Core.Types and move everything operation
from GHC.Core.Op to GHC.Core?

Cheers,
Sylvain


On 03/04/2020 22:26, Andreas Klebinger wrote:

Hello devs,

While I looked at the renaming a bit when proposed I only just realized
we seem to be using Op as a short name for optimize.

I find this very unintuitive. Can we spare another letter to make this
GHC.Core.Opt instead?

We use opt pretty much everywhere else in GHC already.

Cheers
Andreas


___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs



Re: Module Renaming: GHC.Core.Op

2020-04-03 Thread Sylvain Henry

Hi Andreas,

"Op" stands for "Operation" but it's not very obvious (ironically, when I 
started this renaming work one of the motivations was to avoid ambiguous 
acronyms... failed).


The idea was to separate Core types from Core 
transformations/analyses/passes. I couldn't find anything better than 
"Operation" to sum up the latter category, but I concede it's not very good.


But perhaps we should do the opposite as we're doing in GHC.Tc: put all 
the Core types in GHC.Core.Types and move everything operation from 
GHC.Core.Op to GHC.Core?


Cheers,
Sylvain


On 03/04/2020 22:26, Andreas Klebinger wrote:

Hello devs,

While I looked at the renaming a bit when proposed I only just realized
we seem to be using Op as a short name for optimize.

I find this very unintuitive. Can we spare another letter to make this
GHC.Core.Opt instead?

We use opt pretty much everywhere else in GHC already.

Cheers
Andreas


___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs



Re: stage2 build fails

2020-01-27 Thread Sylvain Henry
Which stage 0 compiler are you using? It seems to be <= 8.10 and yet to 
still have 8038cbd96f4 merged, which seems contradictory.


Anyway, the alternative seems to have been redundant from the beginning and 
should have been removed IMO. I have opened 
https://gitlab.haskell.org/ghc/ghc/merge_requests/2564 to fix this. Does 
it work after applying this patch?


Sylvain


On 27/01/2020 12:42, Simon Peyton Jones via ghc-devs wrote:


It would be good to know how to fix this.  It’s blocking my builds.

For some reason it doesn’t seem to kill CI

Simon

*From:*Simon Peyton Jones
*Sent:* 25 January 2020 20:26
*To:* ghc-devs 
*Subject:* stage2 build fails

I’m getting this with “sh validate –legacy”

compiler/main/DynFlags.hs:1344:15: error: [-Woverlapping-patterns, 
-Werror=overlapping-patterns]


    Pattern match is redundant

    In an equation for ‘settings’: settings s | otherwise = ...

 |

1344 | | otherwise = panic $ "Invalid cfg parameters." ++ 
exampleString


 |   ^

This is when compiling the stage-2 compiler.  There’s an ifdef in 
DynFlags thus


#if __GLASGOW_HASKELL__ <= 810

    | otherwise = panic $ "Invalid cfg parameters." ++ 
exampleString


#endif

but somehow it’s not triggering for the stage2 compiler.

Any ideas?  It’s blocking a full build.

This #ifdef was added in 8038cbd96f4, when GHC became better at 
reporting redundant code.


Simon


___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Marge bot review link

2020-01-22 Thread Sylvain Henry
It seems that we just have to add `add-part-of: true` to the marge-bot 
config file, according to https://github.com/smarkets/marge-bot
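For reference, the corresponding entry in the bot's YAML configuration would 
presumably be a one-liner like the sketch below (untested; check the 
marge-bot README linked above for the authoritative option name and file 
location).

```yaml
# marge-bot config (sketch): embed a "Part-of: <merge request URL>"
# trailer in the commit messages it merges.
add-part-of: true
```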


Cheers
Sylvain


On 12/01/2020 10:10, Ben Gamari wrote:
It likely is possible. However, I have been a bit reluctant to touch 
Marge since it is supposed to be a temporary measure and changes have 
historically resulted in regressions. I do hope that merge train 
support will finally be usable in the next release of GitLab.


Cheers,

- Ben

On January 11, 2020 9:07:40 AM EST, loneti...@gmail.com wrote:

Hi Ben,

I’m wondering if it’s possible to get marge to amend the commit
message before it merges it to include links to the review requests.

I really miss that phab feature..

Thanks,

Tamar


--
Sent from my Android device with K-9 Mail. Please excuse my brevity.

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Question about negative Integers

2019-11-16 Thread Sylvain Henry
Alright. Thanks everyone for the convincing answers. I will keep the 
current behavior and I will document that operations may be slower than 
one might expect.


Cheers,
Sylvain

On 16/11/2019 12:04, Joachim Breitner wrote:

Hi,

Am Freitag, den 15.11.2019, 17:04 +0100 schrieb Sylvain Henry:

However integer-gmp and
integer-simple fake two's complement encoding for Bits operations.

just a small factoid: the Coq standard library provide the same
semantics. I’d lean towards leaving it as it is. If someone need the
“other” semantics, they can easily throw in a (very efficient) `abs` in
the right places.

Cheers,
Joachim


___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Question about negative Integers

2019-11-15 Thread Sylvain Henry

Hi GHC devs,

As some of you may know, I am working on fixing several longstanding 
issues with GHC's big numbers implementation (Integer, Natural). You can 
read more about it here: 
https://gitlab.haskell.org/hsyl20/ghc/raw/hsyl20-integer/libraries/ghc-bignum/docs/ghc-bignum.rst


To summarize, we would have a single `ghc-bignum` package with different 
backends (GMP, pure Haskell, etc.). The backend is chosen with a Cabal 
flag and new backends are way easier to add. All the backends use the 
same representation which allows Integer and Natural types and datacons 
to be wired-in which has a lot of nice consequences (remove some 
dependency hacks in base package, make GHC agnostic of the backend used, 
etc.).


A major roadblock in previous attempts was that integer-simple doesn't 
use the same representations for numbers as integer-gmp. But I have 
written a new pure Haskell implementation which happens to be faster 
than integer-simple (see perf results in the document linked above) and 
that uses the common representation (similar to what was used in 
integer-gmp).


I am very close to submitting a merge request, but there is a remaining 
question about the Bits instance for negative Integer numbers:


We don't store big negative Integers using two's complement encoding; 
instead we use a signed-magnitude representation (i.e. we use constructors 
to distinguish between (big) positive and negative numbers). It's already 
true today in integer-simple and integer-gmp. However integer-gmp and 
integer-simple fake two's complement encoding for Bits operations. As a 
consequence, every Bits operation on negative Integers does *a lot* of 
stuff. E.g. testing a single bit with `testBit` is linear in the size of 
the number, a logical `and` between two numbers involves additions and 
subtractions, etc.
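
For concreteness, here is a rough sketch (my own illustration, not the integer-gmp code) of the identity such a faked instance relies on: for n < 0, bit i of n in two's complement is the complement of bit i of (-n - 1), since n = -(complement n) - 1.

```haskell
import Data.Bits (testBit)

-- Illustrative only: how a two's-complement testBit can be emulated on
-- top of a sign-magnitude representation. For n < 0 we only ever apply
-- testBit to the non-negative number (-n - 1), so there is no circularity.
testBitTC :: Integer -> Int -> Bool
testBitTC n i
  | n >= 0    = testBit n i
  | otherwise = not (testBit (negate n - 1) i)
```

Even this simplest case costs a negation and a subtraction on a big number; operations like `.&.` require full additions and subtractions.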


Question is: do we need/want to keep this behavior? There is nothing in 
the report that says that Integer's Bits instance has to mimic two's 
complement encoding. What's the point of slowly accessing a fake 
representation instead of the actual one? Could we deprecate this? The 
instance isn't even coherent: popCount returns the negated number of 1s 
in the absolute value, as it can't return an infinite value.


Thanks,
Sylvain


Re: How to navigate around the source tree?

2019-10-23 Thread Sylvain Henry

With `--fully-qualified` fast-tags also generates qualified tags:

```
GHC.Hs    GHC/Hs.hs    21;"    m
GHC.Hs.Binds    GHC/Hs/Binds.hs    20;"    m
GHC.Hs.Binds.ABE    GHC/Hs/Binds.hs    349;"    C
...
```

If your code editor can search for qualified tags, I guess it should 
work. There is a script for Vim 
(https://github.com/elaforge/fast-tags/blob/master/tools/qualified_tag.py) 
for example.


Sylvain


On 23/10/2019 15:26, Matthew Pickering wrote:

I use `fast-tags` which doesn't look at the hierarchy at all and I'm
not sure what the improvement would be as the names of the modules
would still clash.

If there is some other recommended way to jump to a module then that
would also work for me.

Matt


On Wed, Oct 23, 2019 at 12:08 PM Sylvain Henry  wrote:

Hi,

How do you generate your tags file? It seems to be a shortcoming of the
generator to not take into account the location of the definition file.

  > Perhaps `HsUtils` and `StgUtils` would be appropriate to
disambiguate`Hs/Utils` and `StgToCmm/Utils`.

We are promoting the module prefixes (`Hs`, `Stg`, `Tc`, etc.) into
proper module layers (e.g. `HsUtils` becomes `GHC.Hs.Utils`) so it would
be redundant to add the prefixes back. :/

Cheers,
Sylvain

On 23/10/2019 12:52, Matthew Pickering wrote:

Hi,

The module rework has broken my workflow.

Now my tags file is useless for jumping to modules as there are
multiple "Utils" and "Types" modules. Invariably I am jumping to the
wrong one. What do other people do to avoid this?

Can we either revert these changes or give these modules unique names
to facilitate the only reliable way of navigating the code base?
Perhaps `HsUtils` and `StgUtils` would be appropriate to disambiguate
`Hs/Utils` and `StgToCmm/Utils`.

Cheers,

Matt


Re: How to navigate around the source tree?

2019-10-23 Thread Sylvain Henry

Hi,

How do you generate your tags file? It seems to be a shortcoming of the 
generator to not take into account the location of the definition file.


> Perhaps `HsUtils` and `StgUtils` would be appropriate to 
disambiguate`Hs/Utils` and `StgToCmm/Utils`.


We are promoting the module prefixes (`Hs`, `Stg`, `Tc`, etc.) into 
proper module layers (e.g. `HsUtils` becomes `GHC.Hs.Utils`) so it would 
be redundant to add the prefixes back. :/


Cheers,
Sylvain

On 23/10/2019 12:52, Matthew Pickering wrote:

Hi,

The module rework has broken my workflow.

Now my tags file is useless for jumping to modules as there are
multiple "Utils" and "Types" modules. Invariably I am jumping to the
wrong one. What do other people do to avoid this?

Can we either revert these changes or give these modules unique names
to facilitate the only reliable way of navigating the code base?
Perhaps `HsUtils` and `StgUtils` would be appropriate to disambiguate
`Hs/Utils` and `StgToCmm/Utils`.

Cheers,

Matt


Re: ByteArray# as a foreign import argument?

2019-10-11 Thread Sylvain Henry

Or better 98668305453ea1158c97c8a2c1a90c108aa3585a (2001):

From the commit message:

    - finally, remove the last vestiges of ByteArray and MutableByteArray
      from the core libraries.  Deprecated implementations will be
      available in the lang compatibility package.


On 11/10/2019 10:32, Sylvain Henry wrote:
> But I can't find such a ByteArray type definition in today's common 
packages. What's the rationale for this piece of code?


Doing some archaeology, these types seem to have been removed from 
ghc/lib/std/PrelArr.lhs in e921b2e307532e0f30eefa88b11a124be592bde4 
(1999):


 data Ix ix => Array ix elt        = Array        ix ix (Array# elt)
-data Ix ix => ByteArray ix      = ByteArray ix ix ByteArray#
 data Ix ix => MutableArray s ix elt = MutableArray ix ix 
(MutableArray# s elt)
-data Ix ix => MutableByteArray s ix = MutableByteArray ix ix 
(MutableByteArray# s)


So it's probably dead code since then.
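
For reference, in modern syntax the removed wrapper would look roughly like this (a reconstruction from the old diff above, not a type that exists in today's libraries):

```haskell
{-# LANGUAGE MagicHash #-}

import GHC.Exts (ByteArray#)

-- Reconstruction of the pre-2001 wrapper that the DsCCall code still
-- pattern-matches on: a product type of data-con arity 3 whose last
-- field is a ByteArray#.
data ByteArray ix = ByteArray ix ix ByteArray#
```

This matches the shape the desugarer checks for: `is_product_type`, `data_con_arity == 3`, and a third field whose tycon is `byteArrayPrimTyCon`.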

Cheers,
Sylvain


On 10/10/2019 21:15, Shao, Cheng wrote:

Hello devs,

I've been trying to figure out how to pass lifted types as foreign
types, then encountered the following code in the `DsCCall` module
(https://gitlab.haskell.org/ghc/ghc/blob/master/compiler/deSugar/DsCCall.hs#L172): 



```
   -- Byte-arrays, both mutable and otherwise; hack warning
   -- We're looking for values of type ByteArray, MutableByteArray
   --    data ByteArray  ix = ByteArray    ix ix ByteArray#
   --    data MutableByteArray s ix = MutableByteArray ix ix
(MutableByteArray# s)
   | is_product_type &&
 data_con_arity == 3 &&
 isJust maybe_arg3_tycon &&
 (arg3_tycon ==  byteArrayPrimTyCon ||
  arg3_tycon ==  mutableByteArrayPrimTyCon)
   = do case_bndr <- newSysLocalDs arg_ty
    vars@[_l_var, _r_var, arr_cts_var] <- newSysLocalsDs 
data_con_arg_tys

    return (Var arr_cts_var,
    \ body -> Case arg case_bndr (exprType body) [(DataAlt
data_con,vars,body)]
   )
```

It seems we allow a "ByteArray" type as a foreign import argument, if
the third field of the datacon is a ByteArray# or MutableByteArray#.
But I can't find such a ByteArray type definition in today's common
packages. What's the rationale for this piece of code?

Cheers,
Cheng


Re: ByteArray# as a foreign import argument?

2019-10-11 Thread Sylvain Henry
> But I can't find such a ByteArray type definition in today's common 
packages. What's the rationale for this piece of code?


Doing some archaeology, these types seem to have been removed from 
ghc/lib/std/PrelArr.lhs in e921b2e307532e0f30eefa88b11a124be592bde4 (1999):


 data Ix ix => Array ix elt        = Array        ix ix (Array# elt)
-data Ix ix => ByteArray ix      = ByteArray ix ix ByteArray#
 data Ix ix => MutableArray s ix elt = MutableArray ix ix 
(MutableArray# s elt)
-data Ix ix => MutableByteArray s ix = MutableByteArray ix ix 
(MutableByteArray# s)


So it's probably dead code since then.

Cheers,
Sylvain


On 10/10/2019 21:15, Shao, Cheng wrote:

Hello devs,

I've been trying to figure out how to pass lifted types as foreign
types, then encountered the following code in the `DsCCall` module
(https://gitlab.haskell.org/ghc/ghc/blob/master/compiler/deSugar/DsCCall.hs#L172):

```
   -- Byte-arrays, both mutable and otherwise; hack warning
   -- We're looking for values of type ByteArray, MutableByteArray
   -- data ByteArray ix = ByteArray ix ix ByteArray#
   -- data MutableByteArray s ix = MutableByteArray ix ix
(MutableByteArray# s)
   | is_product_type &&
 data_con_arity == 3 &&
 isJust maybe_arg3_tycon &&
 (arg3_tycon ==  byteArrayPrimTyCon ||
  arg3_tycon ==  mutableByteArrayPrimTyCon)
   = do case_bndr <- newSysLocalDs arg_ty
        vars@[_l_var, _r_var, arr_cts_var] <- newSysLocalsDs data_con_arg_tys
        return (Var arr_cts_var,
                \ body -> Case arg case_bndr (exprType body)
                            [(DataAlt data_con, vars, body)]
               )
```

It seems we allow a "ByteArray" type as a foreign import argument, if
the third field of the datacon is a ByteArray# or MutableByteArray#.
But I can't find such a ByteArray type definition in today's common
packages. What's the rationale for this piece of code?

Cheers,
Cheng


Re: GHC's module hierarchy

2019-10-02 Thread Sylvain Henry

Hi all,

We are back at considering an overhaul of the module structure of GHC. 
Ticket #13009 [1] is the place where the discussion takes place: this 
is a call for participation in this discussion!


Thanks,
Sylvain

PS: this work was supposed to be step 1 of a larger effort to make GHC 
more modular. See the wiki page [2] for more details and don't 
hesitate to give some feedback.


[1] https://gitlab.haskell.org/ghc/ghc/issues/13009
[2] https://gitlab.haskell.org/ghc/ghc/wikis/Make-GHC-codebase-more-modular


On 15/06/2017 09:41, Simon Peyton Jones via ghc-devs wrote:


Dear ghc-devs

hsyl20 proposes a radical overhaul of the module structure of GHC 
itself.  He or she suggested it six months ago in


https://ghc.haskell.org/trac/ghc/ticket/13009

and has now offered a monster patch

https://phabricator.haskell.org/D3647


It’s clearly the result of a lot of work, but I was the only one who 
responded on the original ticket, and it’ll affect all of your lives 
in a very immediate way.


So, would you like to

·consider the idea

·look at the actual re-mapping of modules hsyl20 proposes

·express an opinion about whether to go ahead

Probably the ticket, rather than Phab, is the best place to comment on 
the general idea.


I’d like to thank hsyl20.  GHC’s rather flat module structure has 
grown incrementally over years.


But still, there are pros and cons.

Simon




Re: GHC 8.10.1 Release Plan

2019-09-23 Thread Sylvain Henry

Hi Ben,

I think we should complete 
https://gitlab.haskell.org/ghc/ghc/issues/13009 for 8.10 (to avoid a 
release with inconsistent module hierarchy and to make backporting easier).


Cheers,
Sylvain

On 18/09/2019 22:07, Ben Gamari wrote:

tl;dr. If you have unmerged work that you would like to be in GHC 8.10 please
reply to this email and submit it for review in the next couple
of weeks.



Re: Any ways to test a GHC build against large set of packages (including test suites)?

2019-07-25 Thread Sylvain Henry

Hi,

I've never used stackage-curator but "curator 2.0" [1] seems to generate 
a stack.yaml file that can be used by Stack to build all the packages of 
the selected snapshot.


As Stack supports installing GHC bindists and Stack 2.0 even supports 
building and installing GHC from a Git repository [2], you should just 
have to edit the generated stack.yaml file to use another compiler.
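
For instance, the relevant tweaks might look like this (an illustrative stack.yaml fragment, assuming your custom GHC is on PATH; check the Stack documentation for the exact option names):

```yaml
# Use the GHC found on PATH instead of a Stack-managed one
system-ghc: true
# Accept a compiler version the snapshot doesn't expect (e.g. a HEAD build)
skip-ghc-check: true
```

Alternatively, Stack 2.0's build-from-source support lets the `compiler` field point at a Git commit directly.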


Cheers,
Sylvain

[1] https://github.com/commercialhaskell/curator
[2] 
https://docs.haskellstack.org/en/stable/yaml_configuration/#building-ghc-from-source-experimental


On 25/07/2019 11:23, Gert-Jan Bottu wrote:

Hi,

I'm trying to do something similar : I'm hacking around with GHC, and 
would like to build a large set of packages to verify my changes. 
Similarly to the steps described below, I've followed the scheduled 
build in .circle/config.yml, but I can't figure out how to force it to 
use my own (hacked upon) GHC build?


More concretely, the steps I took (from the lastest .circle/config.yml):
- Installed my local GHC to ~/ghc-head
- Installed stackage-build-plan, stackage-curator and stackage-head 
from git repos

- export BUILD_PLAN=nightly-2018-10-23
- curl 
https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/metadata.json 
--output metadata.json
- curl 
https://raw.githubusercontent.com/fpco/stackage-nightly/master/$BUILD_PLAN.yaml 
--output $BUILD_PLAN.yaml

- fix-build-plan $BUILD_PLAN.yaml custom-source-urls.yaml
- stackage-curator make-bundle --allow-newer --jobs 9 --plan-file 
$BUILD_PLAN.yaml --docmap-file docmap-file.yaml --target $BUILD_PLAN 
--skip-haddock --skip-hoogle --skip-benches --no-rebuild-cabal -v > 
build.log 2>&1


This manages to build Stackage and generate a report just fine, but it 
doesn't use my ~/ghc-head GHC install. Any ideas how I can point 
stackage-curator to a specific GHC install?


Thanks

Gert-Jan

On 10.08.18 10:39, Ömer Sinan Ağacan wrote:

Hi,

This is working great, I just generated my first report. One problem 
is stm-2.4
doesn't compile with GHC HEAD, we need stm-2.5.0.0. But that's not 
published on
Hackage yet, and latest nightly still uses stm-2.4.5.0. I wonder if 
there's
anything that can be done about this. Apparently stm blocks 82 
packages (I
don't know if that's counting transitively or just packages that are 
directly

blocked by stm). Any ideas about this?

Ömer

Ömer Sinan Ağacan , 9 Ağu 2018 Per, 14:45
tarihinde şunu yazdı:
Ah, I now realize that that command is supposed to print that 
output. I'll

continue following the steps and keep you updated if I get stuck again.

Ömer

Ömer Sinan Ağacan , 9 Ağu 2018 Per, 13:20
tarihinde şunu yazdı:

Hi Manuel,

I'm trying stackage-head. I'm following the steps for the scheduled 
build in

.circleci/config.yml. So far steps I took:

- Installed ghc-head (from [1]) to ~/ghc-head
- Installed stackage-build-plan, stackage-curator and stackage-head 
(with

   -fdev) from git repos, using stack.
- export BUILD_PLAN=nightly-2018-07-30 (from config.yml)
- curl 
https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/metadata.json

--output metadata.json
- curl 
https://raw.githubusercontent.com/fpco/stackage-nightly/master/$BUILD_PLAN.yaml

--output $BUILD_PLAN.yaml

Now I'm doing

- ./.local/bin/stackage-head already-seen --target $BUILD_PLAN
--ghc-metadata metadata.json --outdir build-reports

but it's failing with

 The combination of target and commit is new to me

Any ideas what I'm doing wrong?

Thanks

[1]: 
https://ghc-artifacts.s3.amazonaws.com/nightly/validate-x86_64-linux/latest/bindist.tar.xz


Ömer

Ömer Sinan Ağacan , 7 Ağu 2018 Sal, 23:28
tarihinde şunu yazdı:
Thanks for both suggestions. I'll try both and see which one works 
better.


Ömer

Manuel M T Chakravarty , 7 Ağu 2018 Sal, 18:15
tarihinde şunu yazdı:

Hi Ömer,

This is exactly the motivation for the Stackage HEAD works that 
we have pushed at Tweag I/O in the context of the GHC DevOps 
group. Have a look at


   https://github.com/tweag/stackage-head

and also the blog post from when the first version went live:

https://www.tweag.io/posts/2018-04-17-stackage-head-is-live.html

Cheers,
Manuel

Am 06.08.2018 um 09:40 schrieb Ömer Sinan Ağacan 
:


Hi,

I'd like to test some GHC builds + some compile and runtime flag 
combinations
against a large set of packages by building them and running 
test suites. For

this I need

- A set of packages that are known to work with latest GHC
- A way to build them and run their test suites (if I could 
specify compile and

  runtime flags that'd be even better)

I think stackage can serve as (1) but I don't know how to do 
(2). Can anyone
point me to the right direction? I vaguely remember some 
nix-based solution for
this that was being discussed on the IRC channel, but can't 
recall any details.


Thanks,

Ömer


Re: CI execution

2019-04-08 Thread Sylvain Henry

Yes, this is a consequence of a bug in gitlab which meant that pushes
to branches which were also MRs were built twice.


Oh ok!


If you want your commit to be built you could make an MR?


I don't like the idea of submitting an MR just to test some code. It isn't a 
merge request yet, but I would still like to check that I don't break anything on 
platforms I don't have access to, and to check for performance regressions.


I'm not sure there is a way to manually trigger the CI pipeline. If
you really want to you could modify the .gitlab-ci.yml file on your
branch.


I've just read on [1] that we can allow this. Hence: 
https://gitlab.haskell.org/ghc/ghc/merge_requests/730

Cheers,
Sylvain


[1] https://docs.gitlab.com/ee/ci/yaml/#using-your-own-runners
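
For reference, branch triggering in a 2019-era .gitlab-ci.yml was controlled per job with `only:`/`except:` clauses, roughly like this (an illustrative fragment, not GHC's actual configuration):

```yaml
validate:
  script:
    - ./validate
  only:
    - branches        # run on pushes to any branch
    - merge_requests  # and on merge-request pipelines
```

Editing such a clause on a personal branch is the "modify the .gitlab-ci.yml file" workaround Matthew mentions.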

 



On 08/04/2019 15:57, Matthew Pickering wrote:

Yes, this is a consequence of a bug in gitlab which meant that pushes
to branches which were also MRs were built twice.

I'm not sure there is a way to manually trigger the CI pipeline. If
you really want to you could modify the .gitlab-ci.yml file on your
branch.

If you want your commit to be built you could make an MR?

Cheers,

Matt


On Mon, Apr 8, 2019 at 2:22 PM Sylvain Henry  wrote:

Hi devs,

It seems that the CI doesn't check branches in GHC forks on Gitlab
anymore. Is it intentional? Is there a way to trigger a CI execution
manually on a specific branch?

Thanks,
Sylvain



CI execution

2019-04-08 Thread Sylvain Henry

Hi devs,

It seems that the CI doesn't check branches in GHC forks on Gitlab 
anymore. Is it intentional? Is there a way to trigger a CI execution 
manually on a specific branch?


Thanks,
Sylvain



Re: Help with cabal

2019-04-04 Thread Sylvain Henry

Hi Simon,

Hackage on haskell.org is down: 
https://www.reddit.com/r/haskell/comments/b99cef/hackage_downtime/


Sylvain


On 04/04/2019 11:51, Simon Peyton Jones via ghc-devs wrote:


I’m setting up my GHC builds on a new machine, and this time I’m using 
WSL (windows subsystem for Linux).


Small problem.  I’ve installed cabal-3.0, and say “cabal update”.  But 
I get


Unexpected response 502 for http://hackage.haskell.org/timestamp.json



And indeed, that’s what a web browser says for that same URL.

What do to?

Oddly, on another also-new Windows machine, I do exactly the same 
thing, and with ‘cabal update -v’ I can see that it gets the same 
error, but then moves on to try hackage.fpcomplete.com, which works.


Why does the behaviour differ?  And, at least for now, how can I get 
the first machine to look at hackage.fpcomplete.com?


Thanks

Simon




Re: Haddock tree spongled

2019-03-06 Thread Sylvain Henry
Why don't we just put Haddock into GHC's repository? It was proposed in 
a previous discussion in February [1] and it would avoid the pain of 
having it as a submodule while keeping it in sync.


With the following commands we can keep the whole commit history:

In Haddock repo:
> mkdir -p utils/haddock
> git rm .arcconfig .arclint .ghci .gitignore .travis.yml
> git mv -k * utils/haddock
> git commit -a -m "Prepare Haddock merge"

In GHC repo:
> git rm -rf utils/haddock
> git commit -a -m "Prepare Haddock merge"
> git remote add haddock https://gitlab.haskell.org/ghc/haddock
> git fetch haddock
> git merge --allow-unrelated-histories haddock/ghc-8.6 -m "Merge haddock"
> git remote remove haddock

[1] https://mail.haskell.org/pipermail/ghc-devs/2019-February/017120.html

Cheers,
Sylvain

On 06/03/2019 13:08, Ben Gamari wrote:

Ryan Scott  writes:


I do think something is afoot here. The current Haddock submodule commit is
at 07f2ca [1], but the ghc-head branch of Haddock is still at commit 8459c6
[2]. It would be good if someone could update the ghc-head branch
accordingly.


Indeed. Done.

It would be nice if we had a better way to handle this. Ideally Marge or
someone similar would land any relevant haddock patches to ghc-head when
landing a GHC MR.

Cheers,

- Ben




Re: [Haskell-cafe] Final steps in GHC's Trac-to-GitLab migration

2019-03-06 Thread Sylvain Henry

I use it to track tickets and I would also like to see it continued.

Sylvain

On 06/03/2019 12:33, Ara Adkins wrote:

Personally I would like to see it continued, but it may not be worth the work 
if I’m in a minority here.

A potential stopgap would be to ‘watch’ the GHC project on our gitlab instance, 
but I can’t see any way to decide to get emails for notifications rather than 
having to check in at GitLab all the time.

_ara


On 6 Mar 2019, at 11:21, Ben Gamari  wrote:




On March 6, 2019 6:11:49 AM EST, Ara Adkins  wrote:
Super excited for this! Thank you to everyone who's put in so much hard
work to get it done!

One question: what is happening with the trac tickets mailing list? I
imagine it’ll be going away, but for those of us that use it to keep
track of things is there a recommended alternative?


The ghc-commits list will continue to work.

The ghc-tickets list is a good question. I suspect that under gitlab there will 
be less need for this list but we may still want to continue maintaining it 
regardless for continuity's sake. Thoughts?

Cheers,

- Ben




Best,
_ara


On 6 Mar 2019, at 01:21, Ben Gamari  wrote:

Hi everyone,

Over the past few weeks we have been hard at work sorting out the
last batch of issues in GHC's Trac-to-GitLab import [1]. At this
point I believe we have sorted out the issues which are necessary to
perform the final migration:

* We are missing only two tickets (#1436 and #2074, which will require
  a bit of manual intervention to import due to extremely large
  description lengths)

* A variety of markup issues have been resolved

* More metadata is now preserved via labels. We may choose to
  reorganize or eliminate some of these labels in time but it's easier
  to remove metadata after import than it is to reintroduce it. The
  logic which maps Trac metadata to GitLab labels can be found here [2]

* We now generate a Wiki table of contents [3] which is significantly
  more readable than GitLab's default page list. This will be updated
  by a cron job until underlying GitLab pages list becomes more
  readable.

* We now generate redirects for Trac ticket and Wiki links (although
  this isn't visible in the staging instance)

* Milestones are now properly closed when closed in Trac

* Mapping between Trac and GitLab usernames is now a bit more robust

As in previous test imports, we would appreciate it if you could have a
look over the import and let us know of any problems you encounter.

If no serious issues are identified with the staging site we plan to
proceed with the migration this coming weekend. The current migration
plan is to perform the final import on gitlab.haskell.org on Saturday,
9 March 2019.

This will involve both gitlab.haskell.org and ghc.haskell.org being down
for likely the entirety of the day Saturday and likely some of Sunday
(EST time zone). Read-only access will be available to
gitlab.staging.haskell.org for ticket lookup while the import is
underway.

After the import we will wait at least a week or so before we begin the
process of decommissioning Trac, which will be kept in read-only mode
for the duration.

Do let me know if the 9 March timing is problematic.

Cheers,

- Ben


[1] https://gitlab.staging.haskell.org/ghc/ghc
[2]

https://github.com/bgamari/trac-to-remarkup/blob/master/TicketImport.hs#L227

[3] https://gitlab.staging.haskell.org/ghc/ghc/wikis/index
___
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



Re: WIP branches

2019-02-05 Thread Sylvain Henry

What is the advantage of having ghc-wip instead of having all devs just have 
their own forks?


I am all for each dev having its own fork. The ghc-wip repo would be just for 
devs having an SVN workflow (i.e. several people working with commit rights on 
the same branch/fork). If no-one uses this workflow or if Gitlab allows fine 
tuning of permissions on user forks, we may omit the ghc-wip repo altogether.

Regards,
Sylvain

PS: you didn't send your answer to the list, only to me

On 05/02/2019 19:44, Richard Eisenberg wrote:

I agree that movement in this direction would be good (though I don't feel the 
pain from the current mode -- it just seems suboptimal). What is the advantage 
of having ghc-wip instead of having all devs just have their own forks?

Thanks,
Richard


On Feb 5, 2019, at 11:36 AM, Sylvain Henry  wrote:

Hi,

Every time we fetch the main GHC repository, we get *a lot* of "wip/*" branches. That's a 
lot of noise, making the bash completion of "git checkout" pretty useless for instance:


git checkout 

zsh: do you wish to see all 945 possibilities (329 lines)?

Unless I'm missing something, they seem to be used to:
1) get the CI run on personal branches (e.g. wip/USER/whatever)
2) share code between different people (SVN like)
3) archival of not worth merging but still worth keeping code (cf 
https://ghc.haskell.org/trac/ghc/wiki/ActiveBranches)

Now that we have switched to Gitlab, can we keep the main repository clean of 
those branches?
1) The CI is run on user forks and on merge requests in Gitlab so we don't need 
this anymore
2 and 3) Can we have a Gitlab project ("ghc-wip" or something) that isn't protected and 
dedicated to this? The main project could be protected globally instead of per-branch so that only 
Ben and Marge could create release branches, merge, etc. Devs using wip branches would only have to 
add "ghc-wip" as an additional remote repo.

Any opinion on this?

Thanks,
Sylvain



WIP branches

2019-02-05 Thread Sylvain Henry

Hi,

Every time we fetch the main GHC repository, we get *a lot* of "wip/*" 
branches. That's a lot of noise, making the bash completion of "git 
checkout" pretty useless for instance:


> git checkout 
zsh: do you wish to see all 945 possibilities (329 lines)?
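
(As an aside, an illustrative stopgap not mentioned in the thread: narrowing the remote's fetch refspec makes `git fetch` track only the branches you care about, keeping the wip/* noise out of completion.)

```shell
# Demo in a scratch repository: narrow origin's fetch refspec so that
# `git fetch` only tracks master, ignoring the hundreds of wip/* branches.
cd "$(mktemp -d)" && git init -q .
git remote add origin https://gitlab.haskell.org/ghc/ghc.git
git config remote.origin.fetch '+refs/heads/master:refs/remotes/origin/master'
git config remote.origin.fetch   # shows the narrowed refspec
```

Additional branches can be added as extra `remote.origin.fetch` entries when needed.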

Unless I'm missing something, they seem to be used to:
1) get the CI run on personal branches (e.g. wip/USER/whatever)
2) share code between different people (SVN like)
3) archival of not worth merging but still worth keeping code (cf 
https://ghc.haskell.org/trac/ghc/wiki/ActiveBranches)


Now that we have switched to Gitlab, can we keep the main repository 
clean of those branches?
1) The CI is run on user forks and on merge requests in Gitlab so we 
don't need this anymore
2 and 3) Can we have a Gitlab project ("ghc-wip" or something) that 
isn't protected and dedicated to this? The main project could be 
protected globally instead of per-branch so that only Ben and Marge 
could create release branches, merge, etc. Devs using wip branches would 
only have to add "ghc-wip" as an additional remote repo.


Any opinion on this?

Thanks,
Sylvain



Re: GHC | Some refactoring in tcInferApps (!116)

2019-01-15 Thread Sylvain Henry
> Thanks.  But none of the pictures arrived, so I can’t interpret what 
you say.


They probably have been filtered by the ML...

They can seen with the script here: 
https://gist.github.com/hsyl20/912b0a5fec9e7c621d8ac82e46b88d93






Fwd: [commit: ghc] master: Use https links in user-facing startup and error messages (a1c0b70)

2018-12-15 Thread Sylvain Henry

Hi Ben,

I've just noticed that when you commit a diff from phab (as below), you 
are assigned both as committer and *author* (see also here 
https://git.haskell.org/ghc.git/commitdiff/a1c0b70638949a73bbd404c11797f2edf28f5965).


Reviewers and subscribers to the phab diff are indicated in the commit 
message but not the author (here Ingo Blechschmidt "iblech"). I'm 
worried that if the Phabricator instance goes down we won't be able to 
retrieve commit authors, and also that they won't be credited 
appropriately (cf. git shortlog -sne).


(By the way, I've also noticed that the .mailmap content isn't up to date: 
`git shortlog -se | cut -f2 | cut -d'<' -f1 | uniq -d` isn't empty. 
Maybe you could add a check in a script somewhere to ensure that it 
stays empty when you push a commit?)
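
The suggested check can be demonstrated in a scratch repository (the command is taken from the message above; the repository setup is only for illustration):

```shell
# Demo: list author names that appear under more than one email address,
# i.e. the entries a .mailmap should collapse. Empty output means the
# mapping is up to date. (git shortlog -s sorts by author name, so
# duplicate names end up on adjacent lines, which is what uniq -d needs.)
cd "$(mktemp -d)" && git init -q .
git -c user.name='Alice' -c user.email='a@example.com' commit -q --allow-empty -m one
git -c user.name='Alice' -c user.email='b@example.com' commit -q --allow-empty -m two
git shortlog -se HEAD | cut -f2 | cut -d'<' -f1 | uniq -d   # prints the duplicated name: Alice
```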


Regards,
Sylvain



 Forwarded Message 
Subject: 	[commit: ghc] master: Use https links in user-facing startup 
and error messages (a1c0b70)

Date:   Sat, 15 Dec 2018 00:49:47 + (UTC)
From:   g...@git.haskell.org
Reply-To:   ghc-devs@haskell.org
To: ghc-comm...@haskell.org



Repository : ssh://g...@git.haskell.org/ghc

On branch : master
Link : 
http://ghc.haskell.org/trac/ghc/changeset/a1c0b70638949a73bbd404c11797f2edf28f5965/ghc



---


commit a1c0b70638949a73bbd404c11797f2edf28f5965
Author: Ben Gamari 
Date: Fri Dec 14 11:10:56 2018 -0500

Use https links in user-facing startup and error messages
I consider myself lucky that in my circle of friends, `http` urls (as
opposed to `https` urls) are frowned upon in that we generally
apologize in the rare cases that we share an `http` url.
This pull request changes `http` links into their `https` analogues in
the following places:
* In the GHCI startup message (and parts of the User's Guide, where
there are verbatim transcripts of GHCi sessions).
* In a couple of error messages, asking the user to report a bug.
(I also took the liberty to change a single space before the reportabug
url into two spaces, harmonizing this occurrence with the others.)
I'm not trying to start a war. I just had a moment to spare and felt
like preparing this diff. Merge or don't merge as you wish!
Reviewers: bgamari, erikd, simonmar
Subscribers: goldfire, rwbarton, carter
Differential Revision: https://phabricator.haskell.org/D5450



---


a1c0b70638949a73bbd404c11797f2edf28f5965
compiler/typecheck/TcTyClsDecls.hs | 2 +-
compiler/utils/Panic.hs | 2 +-
docs/users_guide/ghci.rst | 4 ++--
ghc/GHCi/UI.hs | 2 +-
rts/RtsMessages.c | 2 +-
5 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/compiler/typecheck/TcTyClsDecls.hs 
b/compiler/typecheck/TcTyClsDecls.hs

index cc9779a..71899a1 100644
--- a/compiler/typecheck/TcTyClsDecls.hs
+++ b/compiler/typecheck/TcTyClsDecls.hs
@@ -3609,7 +3609,7 @@ checkValidRoles tc
report_error doc
= addErrTc $ vcat [text "Internal error in role inference:",
doc,
- text "Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug"]
+ text "Please report this as a GHC bug: https://www.haskell.org/ghc/reportabug"]

{-

diff --git a/compiler/utils/Panic.hs b/compiler/utils/Panic.hs
index 03f095b..4f0f3b1 100644
--- a/compiler/utils/Panic.hs
+++ b/compiler/utils/Panic.hs
@@ -168,7 +168,7 @@ showGhcException exception
showString "panic! (the 'impossible' happened)\n"
. showString (" (GHC version " ++ cProjectVersion ++ " for " ++ 
TargetPlatform_NAME ++ "):\n\t")

. s . showString "\n\n"
- . showString "Please report this as a GHC bug: http://www.haskell.org/ghc/reportabug\n"
+ . showString "Please report this as a GHC bug: https://www.haskell.org/ghc/reportabug\n"

throwGhcException :: GhcException -> a
diff --git a/docs/users_guide/ghci.rst b/docs/users_guide/ghci.rst
index 49a96ca..f468e80 100644
--- a/docs/users_guide/ghci.rst
+++ b/docs/users_guide/ghci.rst
@@ -37,7 +37,7 @@ command ``ghci``:
.. code-block:: none
$ ghci
- GHCi, version 8.y.z: http://www.haskell.org/ghc/ :? for help
+ GHCi, version 8.y.z: https://www.haskell.org/ghc/ :? for help
Prelude>
There may be a short pause while GHCi loads the prelude and standard
@@ -2052,7 +2052,7 @@ by using the :ghc-flag:`-package ⟨pkg⟩` flag:
.. code-block:: none
$ ghci -package readline
- GHCi, version 8.y.z: http://www.haskell.org/ghc/ :? for help
+ GHCi, version 8.y.z: https://www.haskell.org/ghc/ :? for help
Loading package base ... linking ... done.
Loading package readline-1.0 ... linking ... done.
Prelude>
diff --git a/ghc/GHCi/UI.hs b/ghc/GHCi/UI.hs
index ae8ba02..13275f8 100644
--- a/ghc/GHCi/UI.hs
+++ b/ghc/GHCi/UI.hs
@@ -162,7 +162,7 @@ defaultGhciSettings =
ghciWelcomeMsg :: String
ghciWelcomeMsg = "GHCi, version " ++ cProjectVersion ++
- ": http://www.haskell.org/ghc/ :? for help"
+ ": https://www.haskell.org/ghc/ :? for help"
ghciCommands :: [Command]
ghciCommands = map mkCmd [

Re: ghc-prim package-data.mk failed

2018-10-30 Thread Sylvain Henry

Hi Simon,

IIRC you have to delete "libraries/ghc-prim/configure" which is a 
left-over after d7fa8695324d6e0c3ea77228f9de93d529afc23e


Sylvain


On 26/10/2018 13:42, Simon Peyton Jones via ghc-devs wrote:


This has started happening when I do ‘sh validate –no-clean’

"inplace/bin/ghc-cabal" configure libraries/ghc-prim dist-install 
--with-ghc="/home/simonpj/5builds/HEAD-5/inplace/bin/ghc-stage1" 
--with-ghc-pkg="/home/simonpj/5builds/HEAD-5/inplace/bin/ghc-pkg" 
--disable-library-for-ghci --enable-library-vanilla 
--enable-library-for-ghci --disable-library-profiling --enable-shared 
--with-hscolour="/home/simonpj/.cabal/bin/HsColour" 
--configure-option=CFLAGS="-Wall -fno-stack-protector 
-Werror=unused-but-set-variable -Wno-error=inline" 
--configure-option=LDFLAGS="  " --configure-option=CPPFLAGS="   " 
--gcc-options="-Wall -fno-stack-protector    
-Werror=unused-but-set-variable -Wno-error=inline   " --with-gcc="gcc" 
--with-ld="ld.gold" --with-ar="ar" 
--with-alex="/home/simonpj/.cabal/bin/alex" 
--with-happy="/home/simonpj/.cabal/bin/happy"


Configuring ghc-prim-0.5.3...

configure: WARNING: unrecognized options: --with-compiler

checking for gcc... /usr/bin/gcc

checking whether the C compiler works... yes

checking for C compiler default output file name... a.out

checking for suffix of executables...

checking whether we are cross compiling... no

checking for suffix of object files... o

checking whether we are using the GNU C compiler... yes

checking whether /usr/bin/gcc accepts -g... yes

checking for /usr/bin/gcc option to accept ISO C89... none needed

checking whether GCC supports __atomic_ builtins... no

configure: creating ./config.status

config.status: error: cannot find input file: `ghc-prim.buildinfo.in'

*libraries/ghc-prim/ghc.mk:4: recipe for target 
'libraries/ghc-prim/dist-install/package-data.mk' failed*


make[1]: *** [libraries/ghc-prim/dist-install/package-data.mk] Error 1

Makefile:122: recipe for target 'all' failed

make: *** [all] Error 2

I think it is fixed by saying ‘sh validate’ (i.e. start from 
scratch).  But that is slow.


I’m not 100% certain about the circumstances under which it happens, 
but can anyone help me diagnose what is going on when it does?


Thanks

SImon




Re: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms

2018-10-29 Thread Sylvain Henry
I've just found this related ticket: 
https://ghc.haskell.org/trac/ghc/ticket/14422



On 10/26/18 7:04 PM, Richard Eisenberg wrote:
Aha. So you're viewing complete sets as a type-directed property, 
where we can take a type and look up what complete sets of patterns of 
that type might be.


Then, when checking a pattern-match for completeness, we use the 
inferred type of the pattern, access its complete sets, and see if 
these match up. (Perhaps an implementation may optimize this process.)


What I like about this approach is that it works well with GADTs, 
where, e.g., VNil is a complete set for type Vec a Zero but not for 
Vec a n.


I take back my claim of "No types!" then, as this does sound like it 
has the right properties.


For now, I don't want to get bogged down by syntax -- let's figure out 
how the idea should work first, and then we can worry about syntax.


Here's a stab at a formalization of this idea, written in metatheory, 
not Haskell:


Let C : Type -> Set of sets of patterns. C will be the lookup function 
for complete sets. Suppose we have a pattern match, at type tau, 
matching against patterns Ps. Let S = C(tau). S is then a set of sets 
of patterns. The question is this: Is there a set s in S such that Ps 
is a superset of s? If yes, then the match is complete.


What do we think of this design? Of course, the challenge is in 
building C, but we'll tackle that next.
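
As a small illustration, the formalization above can be transcribed 
into a toy Haskell model (all names here are hypothetical stand-ins — 
patterns and types are plain strings, and `lookupCompleteSets` plays 
the role of C):

```haskell
import qualified Data.Set as Set

-- Toy stand-ins: patterns and types are just names.
type Pattern = String
type Ty      = String

-- C : Type -> Set of sets of patterns (the lookup function).
-- Hypothetical contents: Maybe has its data constructors plus a
-- user-declared COMPLETE set {N, J}.
lookupCompleteSets :: Ty -> [Set.Set Pattern]
lookupCompleteSets "Maybe" = [ Set.fromList ["Nothing", "Just"]
                             , Set.fromList ["N", "J"] ]
lookupCompleteSets _       = []

-- A match at type tau against patterns Ps is complete iff some
-- complete set s is covered by Ps (i.e. s is a subset of Ps).
isCompleteMatch :: Ty -> [Pattern] -> Bool
isCompleteMatch tau ps =
  any (`Set.isSubsetOf` Set.fromList ps) (lookupCompleteSets tau)
```

For example, a match on `Nothing` and `Just` (or on `N` and `J`) is 
judged complete at `Maybe`, while a match on `Just` alone is not.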


Richard

On Oct 26, 2018, at 5:20 AM, Sylvain Henry <sylv...@haskus.fr> wrote:


Sorry I wasn't clear. I'm not an expert on the topic but it seems to 
me that there are two orthogonal concerns:


1) How does the checker retrieve COMPLETE sets.

Currently it seems to "attach" them to data type constructors (e.g. 
Maybe). If instead it retrieved them by matching types (e.g. "Maybe 
a", "a") we could write:


{-# COMPLETE LL #-}

From an implementation point of view, it seems to me that finding 
complete sets would become similar to finding (flexible, overlapping) 
class instances. Pseudo-code:


class Complete a where
   conlikes :: [ConLike]
instance Complete (Maybe a) where
   conlikes = [Nothing @a, Just @a]
instance Complete (Maybe a) where
   conlikes = [N @a, J @a]
instance Complete a where
   conlikes = [LL @a]
...


2) COMPLETE set depending on the matched type.

It is a thread hijack from me but while we are thinking about 
refactoring COMPLETE pragmas to support polymorphism, maybe we could 
support this too. The idea is to build a different set of conlikes 
depending on the matched type. Pseudo-code:


instance Complete (Variant cs) where
   conlikes = [V @c | c <- cs] -- cs is a type list

(I don't really care about the pragma syntax)

Sorry for the thread hijack!
Regards,
Sylvain


On 10/26/18 5:59 AM, Richard Eisenberg wrote:
I'm afraid I don't understand what your new syntax means. And, while 
I know it doesn't work today, what's wrong (in theory) with


{-# COMPLETE LL #-}

No types! (That's a rare thing for me to extol...)

I feel I must be missing something here.

Thanks,
Richard

On Oct 25, 2018, at 12:24 PM, Sylvain Henry <sylv...@haskus.fr> wrote:


> In the case where all the patterns are polymorphic, a user must
> provide a type signature but we accept the definition regardless of
> the type signature they provide. 


Currently we can specify the type *constructor* in a COMPLETE pragma:

pattern J :: a -> Maybe a
pattern J a = Just a
pattern N :: Maybe a
pattern N = Nothing
{-# COMPLETE N, J :: Maybe #-}



Instead if we could specify the type with its free vars, we could 
refer to them in conlike signatures:


{-# COMPLETE N, [J:: a -> Maybe a ] :: Maybe a #-}

The COMPLETE pragma for LL could be:

{-# COMPLETE [LL :: HasSrcSpan a => SrcSpan -> SrcSpanLess a -> a ] 
:: a #-}


I'm borrowing the list comprehension syntax on purpose because it 
would allow to define a set of conlikes from a type-list (see my 
request [1]):


{-# COMPLETE [V :: (c :< cs) => c -> Variant cs | c <- cs ] :: 
Variant cs #-}


>To make things more formal, when the pattern-match checker
> requests a set of constructors for some data type constructor T, the
> checker returns:
>
>* The original set of data constructors for T
>* Any COMPLETE sets of type T
>
> Note the use of the phrase *type constructor*. The return type of all
> constructor-like things in a COMPLETE set must all be headed by the
> same type constructor T. Since `LL`'s return type is simply a type
> variable `a`, this simply doesn't work with the design of COMPLETE
> sets.

Could we use a mechanism similar to instance resolution (with 
FlexibleInstances) for the checker to return matching COMPLETE sets 
instead?


--Sylvain


[1]https://mail.haskell.org/pipermail/ghc-devs/2018-July/016053.html

Re: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms

2018-10-26 Thread Sylvain Henry
Sorry I wasn't clear. I'm not an expert on the topic but it seems to me 
that there are two orthogonal concerns:


1) How does the checker retrieve COMPLETE sets.

Currently it seems to "attach" them to data type constructors (e.g. 
Maybe). If instead it retrieved them by matching types (e.g. "Maybe a", 
"a") we could write:


{-# COMPLETE LL #-}

From an implementation point of view, it seems to me that finding 
complete sets would become similar to finding (flexible, overlapping) 
class instances. Pseudo-code:


class Complete a where
  conlikes :: [ConLike]
instance Complete (Maybe a) where
  conlikes = [Nothing @a, Just @a]
instance Complete (Maybe a) where
  conlikes = [N @a, J @a]
instance Complete a where
  conlikes = [LL @a]
...


2) COMPLETE set depending on the matched type.

It is a thread hijack from me but while we are thinking about 
refactoring COMPLETE pragmas to support polymorphism, maybe we could 
support this too. The idea is to build a different set of conlikes 
depending on the matched type. Pseudo-code:


instance Complete (Variant cs) where
  conlikes = [V @c | c <- cs] -- cs is a type list

(I don't really care about the pragma syntax)

Sorry for the thread hijack!
Regards,
Sylvain


On 10/26/18 5:59 AM, Richard Eisenberg wrote:
I'm afraid I don't understand what your new syntax means. And, while I 
know it doesn't work today, what's wrong (in theory) with


{-# COMPLETE LL #-}

No types! (That's a rare thing for me to extol...)

I feel I must be missing something here.

Thanks,
Richard

On Oct 25, 2018, at 12:24 PM, Sylvain Henry <sylv...@haskus.fr> wrote:


> In the case where all the patterns are polymorphic, a user must
> provide a type signature but we accept the definition regardless of
> the type signature they provide. 


Currently we can specify the type *constructor* in a COMPLETE pragma:

pattern J :: a -> Maybe a
pattern J a = Just a
pattern N :: Maybe a
pattern N = Nothing
{-# COMPLETE N, J :: Maybe #-}



Instead if we could specify the type with its free vars, we could 
refer to them in conlike signatures:


{-# COMPLETE N, [J:: a -> Maybe a ] :: Maybe a #-}

The COMPLETE pragma for LL could be:

{-# COMPLETE [LL :: HasSrcSpan a => SrcSpan -> SrcSpanLess a -> a ] 
:: a #-}


I'm borrowing the list comprehension syntax on purpose because it 
would allow to define a set of conlikes from a type-list (see my 
request [1]):


{-# COMPLETE [V :: (c :< cs) => c -> Variant cs | c <- cs ] :: 
Variant cs #-}


>To make things more formal, when the pattern-match checker
> requests a set of constructors for some data type constructor T, the
> checker returns:
>
>* The original set of data constructors for T
>* Any COMPLETE sets of type T
>
> Note the use of the phrase *type constructor*. The return type of all
> constructor-like things in a COMPLETE set must all be headed by the
> same type constructor T. Since `LL`'s return type is simply a type
> variable `a`, this simply doesn't work with the design of COMPLETE
> sets.

Could we use a mechanism similar to instance resolution (with 
FlexibleInstances) for the checker to return matching COMPLETE sets 
instead?


--Sylvain


[1]https://mail.haskell.org/pipermail/ghc-devs/2018-July/016053.html


Re: Suppressing False Incomplete Pattern Matching Warnings for Polymorphic Pattern Synonyms

2018-10-25 Thread Sylvain Henry

In the case where all the patterns are polymorphic, a user must
provide a type signature but we accept the definition regardless of
the type signature they provide. 


Currently we can specify the type *constructor* in a COMPLETE pragma:

pattern J :: a -> Maybe a
pattern J a = Just a
pattern N :: Maybe a
pattern N = Nothing
{-# COMPLETE N, J :: Maybe #-}



Instead if we could specify the type with its free vars, we could refer 
to them in conlike signatures:


{-# COMPLETE N, [J:: a -> Maybe a ] :: Maybe a #-}

The COMPLETE pragma for LL could be:

{-# COMPLETE [LL :: HasSrcSpan a => SrcSpan -> SrcSpanLess a -> a ] :: a 
#-}



I'm borrowing the list comprehension syntax on purpose because it would 
allow to define a set of conlikes from a type-list (see my request [1]):


{-# COMPLETE [V :: (c :< cs) => c -> Variant cs | c <- cs ] :: Variant 
cs #-}




   To make things more formal, when the pattern-match checker
requests a set of constructors for some data type constructor T, the
checker returns:

   * The original set of data constructors for T
   * Any COMPLETE sets of type T

Note the use of the phrase *type constructor*. The return type of all
constructor-like things in a COMPLETE set must all be headed by the
same type constructor T. Since `LL`'s return type is simply a type
variable `a`, this simply doesn't work with the design of COMPLETE
sets.


Could we use a mechanism similar to instance resolution (with 
FlexibleInstances) for the checker to return matching COMPLETE sets instead?



--Sylvain


[1] https://mail.haskell.org/pipermail/ghc-devs/2018-July/016053.html



Type constraint: isOneOf

2018-07-31 Thread Sylvain Henry

Hi,

Motivating example: I have an open sum type (Variant, or V) which can be 
used to store values of different types:


x,y :: V '[String,Int]
x = V "test"
y = V @Int 10

f :: V '[String,Int] -> String
f = \case
   V s        -> "Found string: " ++ s
   V (i :: Int)   -> "Found int: " ++ show (i+10)


V is a pattern synonym defined like this (you don't need to understand 
the details):


pattern V :: forall c cs. (c :< cs) => c -> Variant cs
pattern V x <- (fromVariant -> Just x)
   where V x = toVariant x


Where the (:<) constraint checks that we are not doing anything wrong:

z :: V '[String,Int]
z = V @Float 10

Test.hs:15:5: error:
    • `Float' is not a member of '[String, Int]
    • In the expression: V @Float 10
  In an equation for ‘z’: z = V @Float 10

f :: V '[String,Int] -> String
f = \case
   ...
   V (i :: Float) -> "Found float: " ++ show i

Test.hs:20:4: error:
    • `Float' is not a member of '[String, Int]
    • In the pattern: V (i :: Float)
  In a case alternative: V (i :: Float) -> "Found float: " ++ show i


So far so good, it works well. Now the issues:

1) The case-expression in "f" is reported as non-exhaustive

2) We have to disambiguate the type of "10" in the definition of "y" and 
in the match "(V (i :: Int))"



Both of these issues are caused by the fact that even with the (c :< cs) 
constraint, GHC doesn't know/use the fact that the type "c" is really 
one of the types in "cs".


Would it make sense to add the following built-in "isOneOf" constraint:

∈ :: k -> [k] -> Constraint

pattern V :: forall c cs. (c :< cs, c ∈ cs) => c -> Variant cs

1) GHC could infer pattern completeness information when the V pattern 
is used depending on the type list "cs"


2) GHC might use the "c ∈ cs" constraint in the last resort to infer "c" 
if it remains ambiguous: try to type-check with c~c' forall c' in cs and 
if there is only one valid and unambiguous c', then infer c~c'.



Has it already been considered? I don't know how hard it would be to 
implement nor if it is sound to add something like this. 1 seems 
simpler/safer than 2, though (it is similar to a new kind of COMPLETE 
pragma), and it would already be great! Any thoughts?
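
As an aside, the membership check on its own (though not the 
completeness information or the ambiguity-resolution behaviour the 
proposal asks for) can already be sketched in today's GHC with a 
closed type family. This is only an illustration of the constraint, 
not part of the proposal:

```haskell
{-# LANGUAGE ConstraintKinds      #-}
{-# LANGUAGE DataKinds            #-}
{-# LANGUAGE FlexibleContexts     #-}
{-# LANGUAGE PolyKinds            #-}
{-# LANGUAGE TypeFamilies         #-}
{-# LANGUAGE TypeOperators        #-}
{-# LANGUAGE UndecidableInstances #-}
import Data.Kind (Constraint)
import GHC.TypeLits (ErrorMessage (..), TypeError)

-- Membership constraint: reduces to the empty constraint when c
-- occurs in cs, and to a custom type error otherwise.
type family Elem (c :: k) (cs :: [k]) :: Constraint where
  Elem c (c ': cs) = ()
  Elem c (d ': cs) = Elem c cs
  Elem c '[]       =
    TypeError ('ShowType c ':<>: 'Text " is not a member of the list")

-- Compiles because Int occurs in the list; replacing Int with Float
-- in the context would be rejected with the custom error.
needsInt :: Elem Int '[String, Int] => Int
needsInt = 42
```

What this encoding cannot do is feed the pattern-match checker or 
help infer an ambiguous `c`, which is exactly what points 1 and 2 
above would need built-in support for.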


Best regards,
Sylvain



Re: Failure to Build

2018-07-31 Thread Sylvain Henry

When something like this fails, I usually do:

make maintainer-clean
git submodule update --init
./boot
./configure
make -j


On 31/07/2018 16:34, Ara Adkins wrote:

Hey Devs,

I'm having trouble building the head of the ghc-8.6 branch 
(`26a7f850d1`) on my Arch Linux machine. The essence of the error is 
this as follows, though the full result of make (to the point of 
error) is attached.


```
Configuring template-haskell-2.14.0.0...
ghc-cabal: Encountered missing dependencies:
ghc-boot-th ==8.6.* && ==8.7
```

The output of configure is as follows:
```
--
Configure completed successfully.

   Building GHC version  : 8.7.20180730
          Git commit id  : 26a7f850d15b91ad68d1e28d467faba00bb79144

   Build platform        : x86_64-unknown-linux
   Host platform         : x86_64-unknown-linux
   Target platform       : x86_64-unknown-linux

   Bootstrapping using   : /usr/bin/ghc
      which is version   : 8.4.3

   Using (for bootstrapping) : gcc
   Using gcc                 : gcc
      which is version       : 8.1.1
   Building a cross compiler : NO
   Unregisterised            : NO
   hs-cpp       : gcc
   hs-cpp-flags : -E -undef -traditional
   ar           : ar
   ld           : ld.gold
   nm           : nm
   libtool      : libtool
   objdump      : objdump
   ranlib       : ranlib
   windres      :
   dllwrap      :
   genlib       :
   Happy        : /home/ara/.local/bin/happy (1.19.9)
   Alex         : /usr/bin/alex (3.2.4)
   Perl         : /usr/bin/perl
   sphinx-build : /usr/bin/sphinx-build
   xelatex      : /usr/bin/xelatex

   Using LLVM tools
      clang : clang
      llc   :
      opt   :
   HsColour : /usr/bin/HsColour

   Tools to build Sphinx HTML documentation available: YES
   Tools to build Sphinx PDF documentation available: YES
--
```

Any help would be most appreciated.

Best,
_ara




Re: CI builds failing

2018-04-16 Thread Sylvain Henry
It should be ok with the following revert: 
https://git.haskell.org/ghc.git/commitdiff/0e37361392a910ccbbb2719168f4e8d8272b2ae2



On 17/04/2018 02:54, David Feuer wrote:

On Monday, April 16, 2018 9:16:37 PM EDT Simon Peyton Jones wrote:

I wonder if you are compiling with 8.2.1?   It's broken.  You need 8.2.2

I'm talking about Harbormaster and CircleCI. Ben, do you know if someone 
changed the configuration of the build bots or something?


Re: FFI-free NaN checks? (isDoubleNan and friends)

2018-03-06 Thread Sylvain Henry

Hi,

You can try with foreign primops, it should be faster than the FFI:

In IsDoubleNanPrim.s:

.global isDoubleNan_prim
isDoubleNan_prim:
   xor %rbx,%rbx
   ucomisd %xmm1, %xmm1
   lahf
   testb $68, %ah
   jnp .Lout
   mov $1, %rbx
.Lout:
   jmp *(%rbp)


In IsDoubleNan.hs:

{-# LANGUAGE GHCForeignImportPrim #-}
{-# LANGUAGE MagicHash #-}
{-# LANGUAGE UnliftedFFITypes #-}

module Main where

import GHC.Base

foreign import prim "isDoubleNan_prim" isDoubleNan_prim :: Double# -> Int#

isDoubleNan :: Double -> Bool
isDoubleNan (D# d#) = case isDoubleNan_prim d# of
   0# -> False
   _  -> True

main :: IO ()
main = do
   let testNaN x = putStrLn $ "Testing " ++ show x ++ ": " ++ show 
(isDoubleNan x)

   testNaN 10.3
   testNaN (0/0)

Compile with: ghc -Wall -O IsDoubleNan.hs IsDoubleNanPrim.s

I haven't benchmarked this but I would be interested to see the 
comparison with the other versions on your benchmarks!
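
For comparison, a portable bit-level check can also be written 
against `GHC.Float.castDoubleToWord64` (exposed by the D3358 work 
referenced in the quoted message, so recent base only), avoiding both 
the FFI call and hand-written assembly — a sketch, not benchmarked:

```haskell
import Data.Bits ((.&.))
import Data.Word (Word64)
import GHC.Float (castDoubleToWord64)

-- NaN in IEEE-754 binary64: all exponent bits set, non-zero mantissa.
isNaNPortable :: Double -> Bool
isNaNPortable d =
    (bits .&. expMask) == expMask && (bits .&. mantissaMask) /= 0
  where
    bits = castDoubleToWord64 d
    expMask, mantissaMask :: Word64
    expMask      = 0x7FF0000000000000
    mantissaMask = 0x000FFFFFFFFFFFFF
```

Note that infinities have a zero mantissa and so are correctly 
rejected; only a benchmark would tell how this compares to the 
primop version.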


Cheers,
Sylvain


On 05/03/2018 22:53, Mateusz Kowalczyk wrote:

Hi,

Recently at a client I was profiling some code and isDoubleNaN lit up.
We were checking a lot of doubles for NaN as that's what customer would
send in.

I went to investigate and I found that FFI is used to achieve this. I
was always under the impression that FFI costs a little. I had at the
time replaced the code with a hack with great results:

```
isNaN' :: Double -> Bool
isNaN' d = d /= d
```

While this worked and provided good speedup in my case, this fails
catastrophically if the program is compiled with -ffast-math. This is
expected. I have since reverted it. Seeking an alternative solution I
have thought about re-implementing the C code with a native Haskell
version: after all it just checks a few bits. Apparently unsafeCoerce#
and friends were a big no-no but I found
https://phabricator.haskell.org/D3358 . I have implemented the code at
the bottom of this post. Obviously it's missing endianness (compile-time
switch).

This seems to be faster for smaller `mkInput` list than Prelude.isNaN
but slower slightly on the one below. The `/=` version is the fastest
but very fragile.

My question to you all is whether implementing a version of this
function in Haskell makes sense and if not, why not? The
stgDoubleToWord64 is implemented in CMM and I don't know anything about
the costs of that.

* Is there a cheaper alternative to FFI way?
* If yes, does anyone know how to write it such that it compiles to same
code but without the call overhead? I must have failed below as it's
slower on some inputs.

Basically if a faster way exists for isNaN, something I have to do a
lot, I'd love to hear about it.

I leave you with basic code I managed to come up with. 8.4.x only.


```
{-# LANGUAGE MagicHash#-}
{-# OPTIONS_GHC -O2 -ddump-simpl -ddump-stg -ddump-to-file -ddump-asm #-}
module Main (main) where

import GHC.Float
import GHC.Prim

isNaN' :: Double -> Bool
isNaN' d = d /= d

isNaNBits :: Double -> Bool
isNaNBits (D# d) = case (bits `and#` expMask) `eqWord#` expMask of
   1# -> case bits `and#` mantissaMask of
 0## -> False
 _ -> True
   _ -> False
   where
 bits :: Word#
 bits = stgDoubleToWord64 d

 expMask, mantissaMask :: Word#
  expMask = 0x7FF0000000000000##
  mantissaMask = 0x000FFFFFFFFFFFFF##

main :: IO ()
main = sumFilter isNaN {-isNaN'-} {-isNaNBits-} (mkInput 1)
`seq` pure ()
   where
 nan :: Double
 nan = log (-1)

 mkInput :: Int -> [Double]
 mkInput n = take n $ cycle [1, nan]

 sumFilter :: (Double -> Bool) -> [Double] -> Double
 sumFilter p = Prelude.sum . Prelude.filter (not . p)
```





Re: Help with build ordering issue

2018-02-26 Thread Sylvain Henry



On 26/02/2018 19:19, Ben Gamari wrote:

Sylvain Henry <sylv...@haskus.fr> writes:


On 25/02/2018 21:30, Ben Gamari wrote:

Hmm, I'm afraid that's not particularly illuminating.

It would be helpful to see the output from -ddump-if-trace as this will
tell you why GHC is trying to load this interface file.

Thanks, it has been helpful. The relevant trace is:

Need decl for mkNatural
Considering whether to load GHC.Natural {- SYSTEM -}
Reading interface for base-4.11.0.0:GHC.Natural;
      reason: Need decl for mkNatural
readIFace libraries/base/dist-install/build/GHC/Natural.hi

Now I still don't know why GHC is trying to load the interface of the
module it is compiling.

Keep in mind that GHC may call upon mkNatural while typechecking even
without an import as it is known-key. The output of -ddump-tc-trace
might also help identify whether this is the case.


It must be something like this because I get the same error even when I 
reduce GHC.Natural module to:


{-# LANGUAGE NoImplicitPrelude #-}
module GHC.Natural where




I would help if there were some way I could reproduce this.

The failing patch is here: https://phabricator.haskell.org/D4212

Let's continue the discussion there to avoid spamming this list ;)

Thanks,
Sylvain


Re: Help with build ordering issue

2018-02-26 Thread Sylvain Henry

On 25/02/2018 21:30, Ben Gamari wrote:

Hmm, I'm afraid that's not particularly illuminating.

It would be helpful to see the output from -ddump-if-trace as this will
tell you why GHC is trying to load this interface file.


Thanks, it has been helpful. The relevant trace is:

Need decl for mkNatural
Considering whether to load GHC.Natural {- SYSTEM -}
Reading interface for base-4.11.0.0:GHC.Natural;
    reason: Need decl for mkNatural
readIFace libraries/base/dist-install/build/GHC/Natural.hi

Now I still don't know why GHC is trying to load the interface of the 
module it is compiling.



Re: Help with build ordering issue

2018-02-20 Thread Sylvain Henry



On 20/02/2018 03:25, Ben Gamari wrote:

Sylvain Henry <sylv...@haskus.fr> writes:


Hi,

@Bodigrim is working on a patch (https://phabricator.haskell.org/D4212)
to fix #14170.

The build fails because of interface file errors: "bad interface file"
for GHC.Natural and "failed to load interface" for GHC.Types.

I suspect it is a wired-in module build ordering issue but I haven't
been able to help fixing it. If anyone with more insights could help it
would be much appreciated!


Can you paste the full error? What module is failing to compile? Which
definition is it loading the interface for?

Cheers,

- Ben



"inplace/bin/ghc-stage1" -hisuf hi -osuf  o -hcsuf hc -static  -O0 -H64m -Wall  
-this-unit-id base-4.11.0.0 -hide-all-packages -i -ilibraries/base/. 
-ilibraries/base/dist-install/build -Ilibraries/base/dist-install/build 
-ilibraries/base/dist-install/build/./autogen 
-Ilibraries/base/dist-install/build/./autogen -Ilibraries/base/include   
-optP-DOPTIMISE_INTEGER_GCD_LCM -optP-include 
-optPlibraries/base/dist-install/build/./autogen/cabal_macros.h -package-id rts 
-package-id ghc-prim-0.5.2.0 -package-id integer-gmp-1.0.1.0 -this-unit-id base 
-XHaskell2010 -O  -no-user-package-db -rtsopts  -Wno-trustworthy-safe 
-Wno-deprecated-flags -Wnoncanonical-monad-instances  -odir 
libraries/base/dist-install/build -hidir libraries/base/dist-install/build -stubdir 
libraries/base/dist-install/build   -dynamic-too -c libraries/base/./GHC/Natural.hs -o 
libraries/base/dist-install/build/GHC/Natural.o -dyno 
libraries/base/dist-install/build/GHC/Natural.dyn_o

:1:1: error:
Bad interface file: libraries/base/dist-install/build/GHC/Natural.hi
libraries/base/dist-install/build/GHC/Natural.hi: openBinaryFile: does 
not exist (No such file or directory)


It fails in the CoreTidy pass.


I also got this one (only after a make clean IIRC):

libraries/base/GHC/Exception/Type.hs-boot:1:1: error:
Failed to load interface for ‘GHC.Types’
There are files missing in the ‘ghc-prim-0.5.2.0’ package,
try running 'ghc-pkg check'.
Use -v to see a list of the files searched for.


@hvr suggests it could could related to hs-boot files dependencies.

Cheers,
Sylvain



Help with build ordering issue

2018-02-19 Thread Sylvain Henry

Hi,

@Bodigrim is working on a patch (https://phabricator.haskell.org/D4212) 
to fix #14170.


The build fails because of interface file errors: "bad interface file" 
for GHC.Natural and "failed to load interface" for GHC.Types.


I suspect it is a wired-in module build ordering issue but I haven't 
been able to help fixing it. If anyone with more insights could help it 
would be much appreciated!


Thanks,
Sylvain



Re: pattern signatures

2018-01-10 Thread Sylvain Henry
Or maybe "pattern ascription"? "type-ascription" is implied as 
"ascription" isn't commonly used for something else (AFAIK).


Sylvain


On 08/01/2018 13:59, Simon Peyton Jones via ghc-devs wrote:


I like the idea of distinguishing “signatures” from “annotations”.

But then what is currently a “pattern signature” with extension 
-XPatternSignatures, becomes “type annotation in a pattern” or perhaps 
“pattern type-annotation” which is a bit clumsy.


Possibly “type specification” instead of “type annotation”.  Thus 
“pattern type-spec” which is snappier.


Simon

*From:*ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of 
*Spiwack, Arnaud

*Sent:* 08 January 2018 10:11
*Cc:* Joachim Breitner ; ghc-devs@haskell.org
*Subject:* Re: pattern signatures

In my eyes, signatures are something which goes with a definition.

So (a) is a pattern (synonym) signature, while (b) is merely a type 
annotation on a pattern.


On Fri, Jan 5, 2018 at 11:23 PM, Iavor Diatchki wrote:


Well, as you say, "pattern signature" makes sense for both, so I
would expect to use context to disambiguate.  If I wanted to be
explicit about which one I meant, I'd use:

a) "Pattern synonym signature"

b) "Signature on a pattern"

-Iavor

On Fri, Jan 5, 2018 at 1:12 PM Joachim Breitner wrote:

Hi,

Am Freitag, den 05.01.2018, 13:42 -0500 schrieb Brandon Allbery:
> Further complicated by the fact that that form used to be
called a
> "pattern signature" with accompanying extension, until that was
> folded into ScopedTypeVariables extension.

which I find super confusing, because sometimes I want a
signature on a
pattern and it is counter-intuitive to me why I should not
longer use
the obviously named PatternSignatures extension but rather the
at first
glance unrelated ScopedTypeVariable extension.

But I am derailing the discussion a bit.

Cheers,
Joachim

--
Joachim Breitner
m...@joachim-breitner.de 
http://www.joachim-breitner.de/



___
ghc-devs mailing list
ghc-devs@haskell.org 
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs













Re: GHC HEAD now needs extra tools to build libffi?

2017-10-23 Thread Sylvain Henry
I don't know if it helps, but upgrading my ArchLinux installation 
yesterday broke my builds because of linker issues with stack. I now 
have to specify "ghc-build: nopie" in the stack.yaml file (cf 
https://github.com/commercialhaskell/stack/issues/2712)


Could your error be related to PIE too?


On 23/10/2017 14:57, Joachim Breitner wrote:

Hi,

Am Montag, den 23.10.2017, 20:49 +0800 schrieb Moritz Angermann:

I still can’t make sense of this. Is your gold a different version now as well?

It is “GNU gold (GNU Binutils 2.29.1) 1.14” now, and it seems it was
upgraded:
[2017-10-22 16:58] [ALPM] upgraded binutils (2.27-1 -> 2.29.1-1)

Here is the full build log:
https://raw.githubusercontent.com/nomeata/ghc-speed-logs/master/052ec24412e285aa34911d6187cc2227fc7d86d9.log

Joachim






Re: New primitive types?

2017-08-02 Thread Sylvain Henry

Hi,

I also think we should do this, but it has a lot of ramifications: 
constant folding in Core, codegen, TH, etc.


Also it will break code that uses primitive types directly, so maybe 
it's worth a GHC proposal.


Sylvain


On 01/08/2017 15:37, Michal Terepeta wrote:

Hi all,

I'm working on making it possible to pack constructor fields [1],
example:

```
data Foo = Foo {-# UNPACK #-} !Float {-# UNPACK #-} !Int32
```

should only require 4 bytes for the unpacked `Float` and 4 bytes for the
unpacked `Int32`, which on a 64-bit arch would take just 1 word (instead
of the 2 it currently does).

The diff to support packing of fields is in review [2], but to really
take advantage of it I think we need to introduce new primitive types:
- Int{8,16,32}#
- Word{8,16,32}#
along with some corresponding primops and with some other follow-up
changes like extending `PrimRep`.

Then we could use them in definitions of `Int{8,16,32}` and
`Word{8,16,32}` (they're currently just wrapping `Int#` and `Word#`).
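
To make the shape of the change concrete, here is a sketch (an
illustration only — `Int32#` does not exist yet, and the real patch may
differ):

```haskell
{-# LANGUAGE MagicHash #-}
module Sketch where

import GHC.Exts (Int#)

-- Today (cf. GHC.Int) the boxed type carries a full machine word:
data Int32Today = I32Today# Int#

-- With the proposed primitive, the payload would be a genuine 32-bit
-- value, so the field packer could place two such fields in one word:
--
--   data Int32 = I32# Int32#
```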

Does that sound ok with everyone? (just making sure that this makes
sense before I invest more time into this :)

Thanks,
Michal

[1] https://ghc.haskell.org/trac/ghc/ticket/13825
[2] https://phabricator.haskell.org/D3809







Re: `dump-core` a prettier GHC core viewer

2017-01-12 Thread Sylvain Henry

Hi,

It would be awesome to have a more clever tool that helps further with 
these sorts of low level optimizations---at present I find it to be a 
rather unpleasant task and so avoid it when I can :-)
A few weeks ago I worked on a similar tool. I have just uploaded a demo: 
https://www.youtube.com/watch?v=sPu5UOYPKUw (it still needs a lot of work).


It would be great to have a better Core renderer like yours at some 
point (currently my tool just does syntax highlighting).


Sylvain


On 13/01/2017 00:30, Iavor Diatchki wrote:

Hello,

not really, the plugin does not do anything clever---it simply walks 
over the GHC core and renders whatever it deems necessary to JSON.  
The only extra bits it does is to make the unique names globally 
unique (I thought GHC already did that, but apparently not, perhaps 
that happens during tidying?).


I was thinking of trying to do something like this across compilations 
(i.e., where you keep a history of all the files to compare how your 
changes to the source affected the core), but it hadn't occurred to me 
to try to do it for each phase. Please file a ticket, or even better 
if you have the time please feel free to hack on it.  I was just 
finding myself staring at a lot of core, and wanted something a little 
easier to read, but with all/most of the information still available.


It would be awesome to have a more clever tool that helps further with 
these sorts of low level optimizations---at present I find it to be a 
rather unpleasant task and so avoid it when I can :-)


-Iavor










On Thu, Jan 12, 2017 at 2:58 PM, Joachim Breitner wrote:


Hi,

Am Donnerstag, den 12.01.2017, 14:18 -0800 schrieb Iavor Diatchki:
>
http://yav.github.io/dump-core/example-output/Galua.OpcodeInterpreter

> .html

this is amazing! It should in no way sound diminishing if I say that I
always wanted to create something like that (and I am sure I am not the
only one who will say that :-)).

Can your tool step forward and backward between dumps from different
phases, correlating the corresponding entries?

Thanks,
Joachim

--
Joachim “nomeata” Breitner
m...@joachim-breitner.de  •
https://www.joachim-breitner.de/ 
  XMPP: nome...@joachim-breitner.de
 • OpenPGP-Key: 0xF0FBF51F
  Debian Developer: nome...@debian.org 









Re: Trac to Phabricator (Maniphest) migration prototype

2016-12-21 Thread Sylvain Henry

Nice work!

Would it be possible to convert comment references too? For instance in 
http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T10547#182793 
"comment:21" should be a link to the label #178747


If we do the transfer, we should redirect:
https://ghc.haskell.org/trac/ghc/ticket/{NN}#comment:{CC}
to
phabricator.haskell.org/T{NN}#{tracToPhabComment(NN,CC)}

where "tracToPhabComment" function remains to be written ;-)
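
A sketch in Haskell of what that function could look like (hypothetical
— it assumes the migration records, for every Trac comment, the id of
the Phabricator transaction it became):

```haskell
import qualified Data.Map.Strict as Map

-- (ticket number, Trac comment number) -> Phabricator transaction id,
-- recorded while the migration replays the Trac transaction history.
type CommentTable = Map.Map (Int, Int) Int

tracToPhabComment :: CommentTable -> Int -> Int -> Maybe Int
tracToPhabComment table ticket comment = Map.lookup (ticket, comment) table
```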

Thanks,
Sylvain

On 21/12/2016 11:12, Matthew Pickering wrote:

Dear devs,

I have completed writing a migration which moves tickets from trac to
phabricator. The conversion is essentially lossless. The trac
transaction history is replayed which means all events are transferred
with their original authors and timestamps. I welcome comments on the
work I have done so far, especially bugs as I have definitely not
looked at all 12000 tickets.

http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com

All the user accounts are automatically generated. If you want to see
the tracker from your perspective then send me an email or ping me on
IRC and I can set the password of the relevant account.

NOTE: This is not a decision; this prototype exists to show that the
migration is feasible in a satisfactory way and to remove hypothetical
arguments from the discussion.

I must also thank Dan Palmer and Herbert who helped me along the way.
Dan was responsible for the first implementation and setting up much
of the infrastructure at the Haskell Exchange hackathon in October. We
extensively used the API bindings which Herbert had been working on.

Further information below!

Matt

=

Reasons
==

Why this change? The main argument is consolidation. Having many
different services is confusing for new and old contributors.
Phabricator has proved effective as a code review tool. It is modern
and actively developed with a powerful feature set which we currently
only use a small fraction of.

Trac is showing signs of its age. It is old and slow, and users regularly
lose comments through accidentally refreshing their browser. Further to
this, the integration with other services is quite poor. Commits do
not close tickets which mention them and the only link to commits is a
comment. Querying the tickets is also quite difficult, I usually
resort to using google search or my emails to find the relevant
ticket.


Why is Phabricator better?


Through learning more about Phabricator, there are many small things
that I think it does better which will improve the usability of the
issue tracker. I will list a few but I urge you to try it out.

* Commits which mention ticket numbers are currently posted as trac
comments. There is better integration in phabricator as linking to
commits has first-class support.
* Links with differentials are also more direct than the current
custom field which means you must update two places when posting a
differential.
* Fields are verified so that misspelling user names is not possible
(see #12623 where Ben misspelled his name, for example)
* This is also true for projects and other fields. Inspecting these
fields on trac you will find that the formatting on each ticket is
often quite different.
* Keywords are much more useful as the set of used keywords is discoverable.
* Related tickets are much more substantial, as the status of related
tickets is reflected in the parent ticket.
(http://ec2-52-213-249-242.eu-west-1.compute.amazonaws.com/T7724)

Implementation


Keywords are implemented as projects. A project is a combination of a
tag which can be used with any Phabricator object, a workboard to
organise tasks and a group of people who care about the topic. Not all
keywords are migrated. Only keywords with at least 5 tickets were
added to avoid lots of useless projects. The state of keywords is
still a bit unsatisfactory but I wanted to take this chance to clean
them up.

Custom fields such as architecture and OS are replaced by *projects*
just like keywords. This has the same advantage as other projects.
Users can be subscribed to projects and receive emails when new
tickets are tagged with a project. The large majority of tickets have
very little additional metadata set. I also implemented these as
custom fields but found the result to be less satisfactory.

Some users who have trac accounts do not have phab accounts.
Fortunately it is easy to create new user accounts for these users
which have empty passwords which can be recovered by the appropriate
email address. This means tickets can be properly attributed in the
migration.

The ticket numbers are maintained. I still advocate moving the
infrastructure tickets in order to maintain this mapping. Especially
as there has been little activity in the last year.

Tickets are linked to the relevant commits, differentials and other
tickets. There are 3000 dummy differentials which are used to test
that the 

OpenSearch with GHC manual

2016-11-13 Thread Sylvain Henry

Hi,

Search engines often reference old versions of the GHC user guide. For 
instance with Google and the request "ghc unboxed tuples" I get the 
manual for 7.0.3, 5.04.1 and 6.8.2 as first results. With DuckDuckGo I 
get 6.12.3 and then "latest" versions of the manual.


So I have made a custom search engine for the latest manual (using 
OpenSearch spec). You can install it from the following page: 
http://haskus.fr/ghc/index.html like any other search engine.


Sphinx supports automatic generation of OpenSearch spec: 
http://www.sphinx-doc.org/en/1.4.8/config.html#confval-html_use_opensearch

Maybe we should use this to make the search engine easier to find and use.
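
With Sphinx this would be a one-line configuration change (a sketch; the
URL is just the current location of the latest user guide):

```python
# conf.py — have Sphinx emit an OpenSearch description for the hosted docs
html_use_opensearch = 'https://downloads.haskell.org/~ghc/latest/docs/html/users_guide'
```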

As a side note, would it be possible to have a nicer URI for the latest 
doc? Currently it is: 
https://downloads.haskell.org/~ghc/latest/docs/html/users_guide/index.html

Something like https://haskell.org/ghc/doc/index.html would be better IMO.

Sylvain



Re: 177 unexpected test failures on a new system -- is this yet another linker issue?

2016-11-11 Thread Sylvain Henry
Ok so it seems to be a 64-bit symbol table (according to 
https://docs.oracle.com/cd/E53394_01/html/E54772/ar-3head.html):
> "A 64-bit archive symbol table sets ar_name to the string “/SYM64/”, 
> padded with 9 blank characters to the right."


We should skip it. I will make a patch.

Thanks for the report!

Sylvain


On 11/11/2016 22:26, Ömer Sinan Ağacan wrote:

Sylvain, I tried your patch, here's the output:


 cd "./th/T5976.run" &&  "/home/omer/haskell/ghc/inplace/test
spaces/ghc-stage2" -c T5976.hs -dcore-lint -dcmm-lint
-no-user-package-db -rtsopts -fno-warn-missed-specialisations
-fshow-warning-groups -dno-debug-output -XTemplateHaskell -package
template-haskell -fexternal-interpreter -v0
 Actual stderr output differs from expected:
 --- ./th/T5976.run/T5976.stderr.normalised  2016-11-11
16:22:02.247761214 -0500
 +++ ./th/T5976.run/T5976.comp.stderr.normalised 2016-11-11
16:22:02.247761214 -0500
 @@ -1,7 +1,4 @@
 -
 -T5976.hs:1:1:
 -Exception when trying to run compile-time code:
 -  bar
 -CallStack (from HasCallStack):
 -  error, called at T5976.hs:: in :Main
 -Code: error ((++) "foo " error "bar")
 +ghc-iserv.bin: internal loadArchive: invalid GNU-variant filename
`/SYM64/ ' found while reading
`/home/omer/haskell/ghc/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.5.0.0.a'
 +(GHC version 8.1.20161107 for x86_64_unknown_linux)
 +Please report this as a GHC bug:  
http://www.haskell.org/ghc/reportabug
 +ghc: ghc-iserv terminated (-6)
 *** unexpected failure for T5976(ext-interp)

 Unexpected results from:
 TEST="T5976"

2016-11-11 12:02 GMT-05:00 Ömer Sinan Ağacan <omeraga...@gmail.com>:

So I just tried validating on another system:

 > ghc git:(master) $ uname -a
 Linux linux-enrr.suse 4.1.34-33-default #1 SMP PREEMPT Thu Oct 20 08:03:29
 UTC 2016 (fe18aba) x86_64 x86_64 x86_64 GNU/Linux

 > ghc git:(master) $ gcc --version
 gcc (SUSE Linux) 4.8.5
 Copyright (C) 2015 Free Software Foundation, Inc.
 This is free software; see the source for copying conditions.  There is NO
 warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

 > ghc git:(master) $ ld --version
 GNU ld (GNU Binutils; openSUSE Leap 42.1) 2.26.1
 Copyright (C) 2015 Free Software Foundation, Inc.
 This program is free software; you may redistribute it under the terms of
 the GNU General Public License version 3 or (at your option) a later
 version.
 This program has absolutely no warranty.

It validated without any errors. So I can't reproduce it right now. I'll try
the patch sometime later today when I have the other laptop with me.

Sylvain, do you have any ideas on what difference may be causing this? I'm
pasting gcc and ld versions but I'm not sure if they're relevant at all.

2016-11-11 11:55 GMT-05:00 Sylvain Henry <sylv...@haskus.fr>:

My bad, in fact we do.

Could you try with the attached patch? It shows the failing filename in the
archive.


On 11/11/2016 17:18, Sylvain Henry wrote:

It seems like we don't bypass the special filename "/" (symbol lookup table)
in rts/Linker.c

https://en.wikipedia.org/wiki/Ar_(Unix)#System_V_.28or_GNU.29_variant


On 11/11/2016 16:49, Ömer Sinan Ağacan wrote:

Ah, sorry, that line was truncated. I posted the output here:
https://gist.githubusercontent.com/osa1/ea72655b8369099e84a67e0949adca7e/raw/9e72cbfb859cb839f1898af39a46ff0896237d15/gistfile1.txt

That line should be

+ghc-iserv.bin: internal loadArchive: GNU-variant filename offset not found
while reading filename from
`/home/omer/haskell/ghc/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.5.0.0.a'
+(GHC version 8.1.20161107 for x86_64_unknown_linux)
+Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug


2016-11-11 0:52 GMT-05:00 Reid Barton <rwbar...@gmail.com>:

On Thu, Nov 10, 2016 at 11:12 PM, Ömer Sinan Ağacan
<omeraga...@gmail.com> wrote:

I'm trying to validate on a new system (not sure if related, but it has
gcc
6.2.1 and ld 2.27.0), and I'm having 177 unexpected failures, most
(maybe
even
all) of them are similar to this one:

 => T5976(ext-interp) 1 of 1 [0, 0, 0]
 cd "./th/T5976.run" &&  "/home/omer/haskell/ghc/inplace/test
spaces/ghc-stage2" -c T5976.hs -dcore-dno-debug-output -XTemplateHaskell
-package template-haskell -fexternal-interpreter -v0
 Actual stderr output differs from expected:
 --- ./th/T5976.run/T5976.stderr.normalised  2016-11-10
23:01:39.351997560 -0500
 +++ ./th/T5976.run/T5976.comp.stderr.normalised 2016-11-10
23:01:39.351997560 -0500
 @@ -1,7 +1,4 @@
 -
 -T5976.hs:1:1:
 -Exception when trying to run compile-time code:
 -  bar
 -CallStack (from HasCallStack):
 -  error, called a

Re: 177 unexpected test failures on a new system -- is this yet another linker issue?

2016-11-11 Thread Sylvain Henry

My bad, in fact we do.

Could you try with the attached patch? It shows the failing filename in 
the archive.



On 11/11/2016 17:18, Sylvain Henry wrote:


It seems like we don't bypass the special filename "/" (symbol lookup 
table) in rts/Linker.c


https://en.wikipedia.org/wiki/Ar_(Unix)#System_V_.28or_GNU.29_variant


On 11/11/2016 16:49, Ömer Sinan Ağacan wrote:
Ah, sorry, that line was truncated. I posted the output here: 
https://gist.githubusercontent.com/osa1/ea72655b8369099e84a67e0949adca7e/raw/9e72cbfb859cb839f1898af39a46ff0896237d15/gistfile1.txt 



That line should be
+ghc-iserv.bin: internal loadArchive: GNU-variant filename offset not found 
while reading filename from 
`/home/omer/haskell/ghc/libraries/ghc-prim/dist-install/build/libHSghc-prim-0.5.0.0.a'
+(GHC version 8.1.20161107 for x86_64_unknown_linux)
+Please report this as a GHC bug:http://www.haskell.org/ghc/reportabug

2016-11-11 0:52 GMT-05:00 Reid Barton <rwbar...@gmail.com>:


On Thu, Nov 10, 2016 at 11:12 PM, Ömer Sinan Ağacan
<omeraga...@gmail.com> wrote:
> I'm trying to validate on a new system (not sure if related,
but it has gcc
> 6.2.1 and ld 2.27.0), and I'm having 177 unexpected failures,
most (maybe
> even
> all) of them are similar to this one:
>
> => T5976(ext-interp) 1 of 1 [0, 0, 0]
> cd "./th/T5976.run" && "/home/omer/haskell/ghc/inplace/test
> spaces/ghc-stage2" -c T5976.hs -dcore-dno-debug-output
-XTemplateHaskell
> -package template-haskell -fexternal-interpreter -v0
> Actual stderr output differs from expected:
> --- ./th/T5976.run/T5976.stderr.normalised 2016-11-10
> 23:01:39.351997560 -0500
> +++ ./th/T5976.run/T5976.comp.stderr.normalised 2016-11-10
> 23:01:39.351997560 -0500
> @@ -1,7 +1,4 @@
> -
> -T5976.hs:1:1:
> -Exception when trying to run compile-time code:
> -  bar
> -CallStack (from HasCallStack):
> -  error, called at T5976.hs:: in
:Main
> -Code: error ((++) "foo " error "bar")
> +ghc-iserv.bin: internal loadArchive: GNU-variant filename
offset not
> found while reading filename f

Did this line get truncated? It might help to have the rest of it.

Regards,
Reid Barton










>From c3b4e28627b3e8a81d19648463280b3954bb518e Mon Sep 17 00:00:00 2001
From: Sylvain HENRY <hsy...@gmail.com>
Date: Fri, 11 Nov 2016 17:37:12 +0100
Subject: [PATCH] Show failing filename

---
 rts/Linker.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/rts/Linker.c b/rts/Linker.c
index e46fc05..3a8fd53 100644
--- a/rts/Linker.c
+++ b/rts/Linker.c
@@ -1540,7 +1540,10 @@ static HsInt loadArchive_ (pathchar *path)
 thisFileNameSize = 0;
 }
 else {
-barf("loadArchive: GNU-variant filename offset not found while reading filename from `%s'", path);
+char tmp[17];
+strncpy(tmp,fileName,16);
+tmp[16] = '\0';
+barf("loadArchive: invalid GNU-variant filename `%s' found while reading `%s'", tmp, path);
 }
 }
 /* Finally, the case where the filename field actually contains
-- 
2.10.2



Re: Fwd: Reducing boilerplate

2016-03-11 Thread Sylvain Henry
Hi Ben,

Thanks for your answer. No problem, I can wait.

With this proposal, we would have a really nice story about doing FFI
with GHC. I've been playing with DataKinds and other type-related
extensions for a few days (thanks to everyone involved in implementing
them!) and this extension would remove the last glitch:
https://github.com/hsyl20/ViperVM/blob/master/WritingBindings.md
Btw, the Vector part is inspired from what you did here:
https://github.com/expipiplus1/vulkan/pull/1 (thanks!)

Cheers,
Sylvain


2016-03-11 16:51 GMT+01:00 Ben Gamari <b...@well-typed.com>:
> Sylvain Henry <hsy...@gmail.com> writes:
>
>> Hi devs,
>>
>> I would like to add the support for the following automatic
>> instance-deriving extension:
>>
> Hi Sylvain,
>
> I suspect the person most qualified to answer these questions will be
> Simon who is currently in the middle of paper-writing season.
> Consequently, it may be a while until he is able to answer. That being
> said, I'm quite happy to hear that someone is thinking about these
> proposals.
>
> Cheers,
>
> - Ben
>


Fwd: Reducing boilerplate

2016-03-10 Thread Sylvain Henry
Hi devs,

I would like to add the support for the following automatic
instance-deriving extension:

module M where

class G a where
doG :: a -> Int

class P a where
  doP :: a -> Int
  doP _ = 10
  deriving instance G a where -- automatically derived instance
doG = doP

data X = X
instance P X -- derive G X automatically

print (doG X) -- print 10

See the forwarded mail below for the real context. This extension has
been proposed by someone before as InstanceTemplates:
https://ghc.haskell.org/trac/ghc/wiki/InstanceTemplates

I have modified the parser and the renamer accordingly to get:

class M.G a_awb where
  M.doG :: a_awb -> Int
class M.P a_ap6 where
  M.doP :: a_ap6 -> Int
  M.doP _ = 10
  instance M.G a_ap6 where
M.doG = M.doP

I am new to the compiler part of GHC, so I have a few questions before
I continue:
1a) does it make sense to store the renamed class instance declaration
in an interface file? (supposing it only uses exported methods/types;
we could check that)
1b) will it be possible to create a new instance declaration in
another module by just doing the substitution [a_ap6 -> X] in it?
(i.e. when we parse "instance P X", do we know that it means [a_ap6 ->
X] in class P (and not [a -> X])?)
2) maybe I should go a different way and store only the derived
instance methods as we store class default methods?

Any insight appreciated!

Thanks,
Sylvain


-- Forwarded message --
From: Sylvain Henry <hsy...@gmail.com>
Date: 2016-03-05 14:56 GMT+01:00
Subject: Reducing boilerplate
To: Haskell Cafe <haskell-c...@haskell.org>


Hi,

To write FFI bindings, I use c-storable-deriving [1] to automatically
derive CStorable instances for many data types (the only difference
between Storable and CStorable is that CStorable provides default
methods for types that have Generic instances):

{-# LANGUAGE DeriveGeneric #-}
{-# LANGUAGE DeriveAnyClass #-}
...
data X = X
   { fieldStatus  :: Vector 24 Word8
   , fieldPadding :: Word8
   } deriving (Generic, CStorable)

However I also need a Storable instance, hence I have to write (the
"c*" methods are provided by CStorable):

instance Storable X where
   peek      = cPeek
   poke      = cPoke
   alignment = cAlignment
   sizeOf    = cSizeOf

Is there a way to automatically generate this instance for every data
that has an instance of CStorable? Ideally, I would like to say once
and for all:

instance CStorable a => Storable a where
   peek      = cPeek
   poke      = cPoke
   alignment = cAlignment
   sizeOf    = cSizeOf

As I don't think it is currently possible, would it be sensible to add
support for automatically derived instances attached to classes
(as a GHC extension)?

Regards,
Sylvain

[1] https://hackage.haskell.org/package/c-storable-deriving


RTS linker refactoring

2015-12-08 Thread Sylvain Henry
Hi devs,

I have made a patch to refactor the RTS linker, especially to drastically
reduce its memory usage: https://phabricator.haskell.org/D1470

We need to test it on different OS/architectures before it can be merged.
Here is the current state:
 - Linux/x86-64: OK (Harbormaster and I)
 - Solaris/x86-64: was OK, maybe needs to be retested (@kgardas)
 - OpenBSD/x86-64: was OK, maybe needs to be retested (@kgardas)
 - Solaris/i386: was failing with unrelated error, needs to be retested
(@kgardas)
 - Linux/PowerPC: OK (@erikd)
 - Linux/ARM: was failing with unrelated #11123 (@erikd), OK? (@bgamari)
 - Windows: ?
 - MacOS: ?
 - ia64: ?

I don't have access to Windows and Mac OS boxes so I don't even know if it
compiles there. Could someone test it (validate) on these OSes and report
any issue they encounter to me (by mail or on phabricator)?

Do we support ia64 architecture?

Thanks!
Sylvain


Re: Re: [commit: ghc] ghc-7.10: base: fix #10298 #7695 (25b8478)

2015-05-29 Thread Sylvain Henry
It depends on your locale, which explains the different behavior on your
local machine.

The build machine seems to be using an ASCII locale and GHC wants to print
some Unicode characters (the tick and back-tick surrounding types in error
messages). Before this patch, Iconv was used to do the conversion from
Unicode to ASCII. It seems that it replaces the Unicode ticks with ASCII
ticks (i.e. 0xE28098 in UTF-8 into 0x60 and 0xE28099 into 0x27).
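
That transliteration is easy to mimic; a minimal Python sketch (not
GHC's actual code) of the byte-level mapping described above:

```python
def transliterate_ticks(s: str) -> str:
    """Replace the Unicode single quotes GHC uses in error messages
    (U+2018/U+2019, i.e. 0xE28098/0xE28099 in UTF-8) with the ASCII
    backtick and tick that iconv substitutes under an ASCII locale."""
    return s.replace('\u2018', '`').replace('\u2019', "'")

# The two encodings agree on plain ASCII, which is why a UTF-8 encoder
# can stand in for an ASCII one on such input.
print(transliterate_ticks('\u2018Int\u2019'))
```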

If you enter:
cd ghc/testsuite/tests
grep -rn "match expected type" **/*.stderr

You can see that some .stderr have been generated with a ASCII locale and
some others with a UTF-8 locale by looking at the ticks. Are the tests
expecting ASCII output (e.g. driver/T2507) passing on platforms with UTF-8
locales? The output is not equal to the expected one except if it is
converted to ASCII before the comparison.

With this patch, we don't use Iconv to convert from Unicode to ASCII
because it may not be available in some contexts (docker containers,
initramdisk, etc.). Instead we use the UTF-8 encoder to encode ASCII (ASCII
characters are encoded in the same way in ASCII and in UTF-8) and we don't
try to match Unicode only characters to ASCII ones.

Solutions:
1) change our patch to use our method only when Iconv cannot be used.
2) implement the Unicode to ASCII conversion as performed by Iconv
3) change the locale to a UTF-8 one on the build machine ;-)

Sylvain




2015-05-29 3:23 GMT+02:00 Edward Z. Yang ezy...@mit.edu:

 This commit broke the HM builds: https://phabricator.haskell.org/B4147

 When I validate locally, though, it works fine.

 Edward

 Excerpts from git's message of 2015-05-28 18:11:07 -0700:
  Repository : ssh://g...@git.haskell.org/ghc
 
  On branch  : ghc-7.10
  Link   :
 http://ghc.haskell.org/trac/ghc/changeset/25b84781ed950d59c7bffb77a576d3c43a883ca9/ghc
 
  ---
 
  commit 25b84781ed950d59c7bffb77a576d3c43a883ca9
  Author: Austin Seipp aus...@well-typed.com
  Date:   Tue May 19 04:56:40 2015 -0500
 
  base: fix #10298  #7695
 
  Summary:
  This applies a patch from Reid Barton and Sylvain Henry, which fixes a
  disastrous infinite loop when iconv fails to load locale files, as
  specified in #10298.
 
  The fix is a bit of a hack but should be fine - for the actual
 reasoning
  behind it, see `Note [Disaster and iconv]` for more info.
 
  In addition to this fix, we also patch up the IO Encoding utilities
 to
  recognize several variations of the 'ASCII' encoding (including its
  aliases) directly so that GHC can do conversions without iconv. This
  allows a static binary to sit in an initramfs.
 
  Authored-by: Reid Barton rwbar...@gmail.com
  Authored-by: Sylvain Henry hsy...@gmail.com
  Signed-off-by: Austin Seipp aus...@well-typed.com
 
  Test Plan: Eyeballed it.
 
  Reviewers: rwbarton, hvr
 
  Subscribers: bgamari, thomie
 
  Differential Revision: https://phabricator.haskell.org/D898
 
  GHC Trac Issues: #10298, #7695
 
  (cherry picked from commit e28462de700240288519a016d0fe44d4360d9ffd)
 
  ---
 
  25b84781ed950d59c7bffb77a576d3c43a883ca9
   libraries/base/GHC/IO/Encoding.hs | 14 +-
   libraries/base/GHC/TopHandler.hs  | 29 -
   2 files changed, 41 insertions(+), 2 deletions(-)
 
  diff --git a/libraries/base/GHC/IO/Encoding.hs
 b/libraries/base/GHC/IO/Encoding.hs
  index 31683b4..014b61b 100644
  --- a/libraries/base/GHC/IO/Encoding.hs
  +++ b/libraries/base/GHC/IO/Encoding.hs
  @@ -235,7 +235,14 @@ mkTextEncoding e = case mb_coding_failure_mode of
   _ - Nothing
 
   mkTextEncoding' :: CodingFailureMode - String - IO TextEncoding
  -mkTextEncoding' cfm enc = case [toUpper c | c - enc, c /= '-'] of
  +mkTextEncoding' cfm enc
  +  -- First, specifically match on ASCII encodings directly using
  +  -- several possible aliases (specified by RFC 1345  co), which
  +  -- allows us to handle ASCII conversions without iconv at all (see
  +  -- trac #10298).
  +  | any (== enc) ansiEncNames = return (UTF8.mkUTF8 cfm)
  +  -- Otherwise, handle other encoding needs via iconv.
  +  | otherwise = case [toUpper c | c - enc, c /= '-'] of
   UTF8- return $ UTF8.mkUTF8 cfm
   UTF16   - return $ UTF16.mkUTF16 cfm
   UTF16LE - return $ UTF16.mkUTF16le cfm
  @@ -249,6 +256,11 @@ mkTextEncoding' cfm enc = case [toUpper c | c -
 enc, c /= '-'] of
   #else
   _ - Iconv.mkIconvEncoding cfm enc
   #endif
  +  where
  +ansiEncNames = -- ASCII aliases
  +  [ ANSI_X3.4-1968, iso-ir-6, ANSI_X3.4-1986,
 ISO_646.irv:1991
  +  , US-ASCII, us, IBM367, cp367, csASCII, ASCII,
 ISO646-US
  +  ]
 
   latin1_encode :: CharBuffer - Buffer Word8 - IO (CharBuffer, Buffer
 Word8)
   latin1_encode input output = fmap (\(_why,input',output') -
 (input',output')) $ Latin1.latin1_encode

Re: [Haskell-cafe] Anonymous FFI calls

2015-02-13 Thread Sylvain Henry
Hi,

The FFI pages on the wiki are not really in a good shape in my opinion
(especially for newcomers).

I have started a fresh one here:
https://wiki.haskell.org/Foreign_Function_Interface_(FFI)

This is just the first draft. I will improve it, probably split it into
several pages, and merge information from other pages, especially pages
linked on https://wiki.haskell.org/FFI

Sylvain

2015-02-12 10:02 GMT+01:00 Simon Peyton Jones simo...@microsoft.com:

 | Thanks to everyone who replied!
 |
 | It seems like that through a combination of facilities like `libffi'
 | and `addTopDecls' I can do everything that I wanted to do.

 Great.  But please, please, do write up what you learned on the FFI wiki
 page
 https://wiki.haskell.org/GHC/Using_the_FFI

 Simon

 | -Original Message-
 | From: Francesco Mazzoli [mailto:f...@mazzo.li]
 | Sent: 12 February 2015 09:00
 | To: Simon Peyton Jones
 | Cc: Michael Sloan; Manuel Chakravarty; Geoffrey Mainland
 | (mainl...@cs.drexel.edu); ghc-devs@haskell.org; haskell
 | Subject: Re: [Haskell-cafe] Anonymous FFI calls
 |
 | Thanks to everyone who replied!
 |
 | It seems like that through a combination of facilities like `libffi'
 | and `addTopDecls' I can do everything that I wanted to do.
 |
 | I still want to take a shot at implementing anonymous FFI calls, since
 | IMHO I think they are a very small but useful addition to the
 | language.
 |
 | Francesco
 |
 | On 12 February 2015 at 09:29, Simon Peyton Jones simo...@microsoft.com
 | wrote:
 |  |  Also, I meant to say that addTopDecls is only exported by
 |  |  Language.Haskell.TH.Syntax.  While this is a digression, there are
 | a
 |  |  few other handy functions that are oddly left out of
 |  |  Language.Haskell.TH: addDependentFile, addModFinalizer, and
 | possibly
 |  |  more.
 | 
 |  That does seem wrong.  Do make a patch!
 | 
 |  SIMon
 | 
 |  |
 |  |  -Michael
 |  |
 |  |  On Wed, Feb 11, 2015 at 3:25 PM, Simon Peyton Jones
 |  |  simo...@microsoft.com wrote:
 |  |   I would LOVE someone to improve the documentation for addTopDecls.
 |  |   Manuel Chakravarty and Geoff Mainland were responsible for the
 |  |   implementation.
 |  |  
 |  |   Simon
 |  |  
 |  |   | -Original Message-
 |  |   | From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf
 | Of
 |  |   | Michael Sloan
 |  |   | Sent: 11 February 2015 23:19
 |  |   | To: Francesco Mazzoli
 |  |   | Cc: ghc-devs@haskell.org; haskell
 |  |   | Subject: Re: [Haskell-cafe] Anonymous FFI calls
 |  |   |
 |  |   | It seems like addTopDecls[1] will be able to help here.
 |  |   | Unfortunately, the function is not well documented and not very
 |  |   | discoverable because it's only exported by Language.Haskell.TH.
 |  |   |
 |  |   | The documentation doesn't mention that it can only be used to
 |  |   | create new top-level functions and FFI imports[2].  I think that
 |  |   | adding FFI imports was the main motivation for implementing it.
 |  |   | In the past I've wanted to generate instances via this function,
 |  |   | but unfortunately that's not implemented.
 |  |   |
 |  |   | Hope that helps!
 |  |   | -Michael
 |  |   |
 |  |   | [1] http://hackage.haskell.org/package/template-haskell-2.9.0.0/docs/Language-Haskell-TH-Syntax.html#v:addTopDecls
 |  |   |
 |  |   | [2] https://github.com/ghc/ghc/blob/1d982ba10f590828b78eba992e73315dee33f78a/compiler/typecheck/TcSplice.hs#L818
 |  |   |
 |  |   | On Wed, Feb 11, 2015 at 2:26 AM, Francesco Mazzoli f...@mazzo.li
 |  |  wrote:
 |  |   |  Hi,
 |  |   | 
 |  |   |  I am in a situation where it would be very useful to call C
 |  |   |  functions without an explicit FFI import.  For example, I'd like
 |  |   |  to be able to do
 |  |   | 
 |  |   |  (foreign import ccall cadd :: CInt -> CInt -> CInt) 1 2
 |  |   | 
 |  |   |  instead of declaring the foreign import explicitly at the top
 |  |   |  level.
 |  |   | 
 |  |   |  Is there a way to do this or to achieve similar results in
 | some
 |  |   |  other way?
 |  |   | 
 |  |   |  If not, I imagine it would be easy to implement such a facility
 |  |   |  in GHC, given that the code implementing calls to C functions
 |  |   |  must already be present to support proper FFI imports.  I think
 |  |   |  such an addition would be useful in many cases.
 |  |   | 
 |  |   |  Thanks,
 |  |   |  Francesco
 |  |   |  ___
 |  |   |  Haskell-Cafe mailing list
 |  |   |  haskell-c...@haskell.org
 |  |   |  http://www.haskell.org/mailman/listinfo/haskell-cafe
 |  |   | ___
 |  |   | ghc-devs mailing list
 |  |   | ghc-devs@haskell.org
 |  |   | http://www.haskell.org/mailman/listinfo/ghc-devs
 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs
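
[Archive note] The `addTopDecls` idiom discussed in this thread can be
sketched roughly as follows. This is a hypothetical sketch (the helper
name `ffiImport` is illustrative, not from the thread), using `forImpD`
and `addTopDecls` from the template-haskell package:

{-# LANGUAGE TemplateHaskell #-}
module FFIImport where

import Language.Haskell.TH
import Language.Haskell.TH.Syntax (addTopDecls)

-- Splice-time helper: generate a fresh foreign import for a C function
-- and return a reference to it, so call sites need no explicit
-- top-level declaration of their own.
ffiImport :: String -> Q Type -> Q Exp
ffiImport cName ty = do
  name <- newName ("ffi_" ++ cName)        -- fresh, capture-free name
  dec  <- forImpD CCall Unsafe cName name ty
  addTopDecls [dec]                        -- emit the import at top level
  varE name

-- Usage (from another module, due to the TH stage restriction):
--   $(ffiImport "sin" [t| Double -> Double |]) 1.0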


Re: [Haskell-cafe] Anonymous FFI calls

2015-02-11 Thread Sylvain Henry
2015-02-11 16:18 GMT+01:00 Francesco Mazzoli f...@mazzo.li:


 Relatedly, if I have some function pointer known at runtime that
 addresses a C function that takes some arguments, I have no way to
 invoke it from Haskell, since all FFI imports must be declared at
 compile-time, and I don't know the address of the symbol I want to
 execute at compile time.


You can use FunPtr and "dynamic" imports to convert a FunPtr into a Haskell
function.
https://hackage.haskell.org/package/base-4.7.0.2/docs/Foreign-Ptr.html#g:2
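
A minimal sketch of that conversion (here a "wrapper" import fabricates a
FunPtr for demonstration; in practice the pointer would come from C code or
a dynamic linker):

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.Ptr (FunPtr, freeHaskellFunPtr)
import Foreign.C.Types (CInt)

-- "wrapper": turn a Haskell function into a C function pointer
foreign import ccall "wrapper"
  mkFunPtr :: (CInt -> CInt -> CInt) -> IO (FunPtr (CInt -> CInt -> CInt))

-- "dynamic": turn a C function pointer back into a Haskell function
foreign import ccall "dynamic"
  callFunPtr :: FunPtr (CInt -> CInt -> CInt) -> (CInt -> CInt -> CInt)

main :: IO ()
main = do
  fp <- mkFunPtr (+)          -- stand-in for a pointer obtained at runtime
  print (callFunPtr fp 1 2)   -- calls through the FunPtr; prints 3
  freeHaskellFunPtr fp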

You may be interested in my dynamic-linker-template package [1] to avoid
having to write boilerplate wrappers. For now it only works with dynamic
linking from System.Posix.DynamicLinker, but it could be easily extended to
support other platforms. It automatically generates wrappers for all the
functions in a record as well as the code to load symbol addresses and to
convert them into Haskell functions (examples [2,3]).
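
Without the package, the underlying mechanism on POSIX systems looks roughly
like this (a sketch; the libm path is system-dependent):

{-# LANGUAGE ForeignFunctionInterface #-}
import System.Posix.DynamicLinker (dlopen, dlsym, dlclose, RTLDFlags(RTLD_NOW))
import Foreign.Ptr (FunPtr, castFunPtr)
import Foreign.C.Types (CDouble)

foreign import ccall "dynamic"
  mkSin :: FunPtr (CDouble -> CDouble) -> (CDouble -> CDouble)

main :: IO ()
main = do
  dl <- dlopen "libm.so.6" [RTLD_NOW]   -- path may differ per system
  fp <- dlsym dl "sin"                  -- resolve the symbol at runtime
  print (mkSin (castFunPtr fp) 0)
  dlclose dl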

Sylvain

[1] https://hackage.haskell.org/package/dynamic-linker-template
[2]
https://github.com/hsyl20/dynamic-linker-template/blob/master/Tests/Test.hs
[3]
https://github.com/hsyl20/ViperVM/blob/master/src/lib/ViperVM/Arch/OpenCL/Library.hs
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs