Re: [core libraries] RE: Alex and new Bool primops

2013-09-11 Thread Jan Stolarek
Also got an email error for my last message, so I'm reposting it below with some 
more information. From a technical point of view, the changes to Alex and Happy are 
simple and already implemented on my branches:

https://github.com/jstolarek/alex/commit/32d4244894ae2702d965415483d0479d91570049
https://github.com/jstolarek/happy/commit/430ebe1f82744818cc488cd45fee0effaf5d4b47

I say we bite the bullet - this moment is as good as any other. There is one 
important thing to point out. After updating Alex and Happy you will need to 
update HEAD and rebase your older branches on top of a HEAD containing the new 
primops. Again, this is unavoidable no matter when we decide to make the 
change. Given that my changes affect 7 libraries (and only one of them is a 
submodule) you would need to rebase anyway.

Janek

- Original message -
From: Michael Snoyman mich...@snoyman.com
To: Simon Peyton-Jones simo...@microsoft.com
Cc: core-libraries-commit...@haskell.org, ghc-devs ghc-devs@haskell.org, 
Geoffrey Mainland mainl...@apeiron.net, Jan Stolarek 
jan.stola...@p.lodz.pl, Simon Marlow marlo...@gmail.com
Sent: Wednesday, 11 September 2013, 5:04:24
Subject: Re: [core libraries] RE: Alex and new Bool primops

I got a mail delivery error on this, so I'm going to resend. Apologies to
those who get this twice.


On Tue, Sep 10, 2013 at 6:37 PM, Michael Snoyman mich...@snoyman.com wrote:

 Having a requirement to manually install a newer Alex doesn't seem too
 onerous to me. That would be my recommendation.


 On Tue, Sep 10, 2013 at 11:53 AM, Simon Peyton-Jones 
 simo...@microsoft.com wrote:

  (Simon M: are you ok with updating Alex?  You were one of those who
 argued strongly for using the old names for the new primops.)


 The difficulty is this.  

 · Alex generates Haskell code, by transforming Foo.x into Foo.hs

 · The generated Foo.hs contains references to comparison primops, say
 (>#) :: Int# -> Int# -> Bool

 · Therefore Foo.hs will not work with GHC 7.8 if we have changed the type
 of (>#), which is what I think we have agreed to do.

 · The solution is to make Alex generate a Foo.hs that is compilable with
 either GHC 7.8 or 7.6, by including enough CPP directives (see the sketch
 after this list). Alex already does this for compatibility with earlier GHCs.

 · However, until there is a new version of Alex, you simply won’t be able
 to bootstrap GHC 7.8 (or indeed the current HEAD).
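
A minimal sketch of the kind of CPP shim described above, assuming the
compatibility-module route. The module name (AlexGhcCompat) and the choice of
operators are hypothetical, not Alex's actual template code; the point is just
the __GLASGOW_HASKELL__ guard and the isTrue# wrapping on 7.7 and later:

{-# LANGUAGE CPP, MagicHash #-}
-- Hypothetical shim: expose Bool-returning comparisons on every GHC version.
module AlexGhcCompat ((>#), (<#)) where

import GHC.Exts (Int#)
import qualified GHC.Exts as E

(>#), (<#) :: Int# -> Int# -> Bool
#if __GLASGOW_HASKELL__ >= 707
-- GHC 7.7+: the primops return Int#, so convert back to Bool with isTrue#.
a ># b = E.isTrue# (a E.># b)
a <# b = E.isTrue# (a E.<# b)
#else
-- GHC 7.6 and earlier: the primops already return Bool.
(>#) = (E.>#)
(<#) = (E.<#)
#endif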


 That’s all there is to it.  It’s tiresome and trivial in a sense, but
 it’s a choice we have to make.


 It might be perfectly reasonable to say:

 · You can’t build GHC 7.8 from source with the Haskell Platform until a new
 HP comes out with the new Alex (which will be soon).

 · Unless you install the new Alex manually


 This seems not too bad; people who build GHC from source are generally
 pretty savvy.  The choice between the two is what we seek your guidance on.
 


 (Incidentally, a very similar situation has arisen for Happy: see
 http://ghc.haskell.org/trac/ghc/ticket/8022.  But there the cost of
 perpetuating the status quo for another release cycle seems minimal.)


 Simon


 From: michael.snoy...@gmail.com [mailto:michael.snoy...@gmail.com] On Behalf Of Michael Snoyman
 Sent: 10 September 2013 05:28
 To: Simon Peyton-Jones
 Cc: core-libraries-commit...@haskell.org; ghc-devs; Geoffrey Mainland; Jan Stolarek
 Subject: Re: [core libraries] RE: Alex and new Bool primops


 I'll admit a fair amount of ignorance of the GHC build process. But
 wouldn't it be standard that any tool used in the GHC build process itself,
 and built by GHC itself, would need to have some conditional compilation in
 place to handle API changes? It seems like the questions here are whether
 we should ever allow breaking changes in the API, and in this case whether
 the changes are coming too late in the development cycle. It seems like
 we've agreed on the first count that it's beneficial to allow breaking API
 changes. It could be that in this case we're too late in the dev cycle.


 In this case, it sounds like including the compatibility module in Alex
 would be most expedient, but I'd defer to those who understand the process
 better than me.


 On Mon, Sep 9, 2013 at 5:38 PM, Simon Peyton-Jones simo...@microsoft.com
 wrote:

  Dear Core Libraries Committee

 I think we need your advice.

 This thread (mostly on ghc-devs) shows that if the shim-package and
 boolean-primop decision goes as currently proposed, then we'll need a new
 release of Alex
  * Either to generate an import of GHC.Exts.Compat
(ie depend on the shim package)
  * Or to make its own local tiny shim module for the primops it uses
  * Or maybe some other plan
 (Alex already has quite a bit of stuff designed to make it generate code
 that will be compilable with a variety 

RE: extending GHC plugins with Hooks

2013-09-11 Thread Simon Peyton-Jones
OK, that's fine.  Thanks!

Simon

From: Nicolas Frisby [mailto:nicolas.fri...@gmail.com]
Sent: 10 September 2013 19:18
To: Simon Peyton-Jones
Cc: Luite Stegeman; Edsko de Vries; Thomas Schilling; ghc-devs
Subject: Re: extending GHC plugins with Hooks

My patch was extremely simple, so I'm asking for forgiveness instead of 
permission!

https://github.com/ghc/ghc/commit/850490af1df426b306d898381a358a35425d16c7

The commit note includes a brief explanation of the benefits.

The motivation originates with the HERMIT project at Univ. of Kansas: we'd like 
to help the user generate new top-level declarations in a module (eg a new 
datatype). Re-using the type-checker seems the simplest path towards robustness 
and feature completeness, and this patch removes a simple but onerous obstacle.

Is this OK?
On Thu, Aug 22, 2013 at 11:13 AM, Simon Peyton-Jones 
simo...@microsoft.com wrote:
Luite, Edsko, Thomas, Nicolas

You have all variously proposed improvements to the GHC API and/or the plug-in 
mechanism.  I have been so swamped in the last few months that I have not had a 
chance to look carefully at your proposals, nor how they relate to each other.

We are now only three weeks away from wanting to do a feature freeze on GHC 
7.8, and there are a lot of other things that we want to complete
http://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8
(Mostly they have been gestating for some time.)

So I'm hoping you'll be ok with not putting these plugin-related changes into 
7.8.  I have the sense that they'd benefit from more discussion among the folk 
interested in plugins.  Perhaps some of the ideas could be combined nicely; I 
don't know.  And the people who are going to write plugins are also probably up 
for building HEAD anyhow.

(Exception: Luite, I think you have some fairly narrow, specific changes that 
would help GHCJS, and I'm probably fine with those if you care to send patches.)

Please say if you think there's a really strong reason for putting stuff into 
7.8.

Thanks

Simon

From: ghc-devs 
[mailto:ghc-devs-boun...@haskell.org] On 
Behalf Of Luite Stegeman
Sent: 21 August 2013 03:51
To: ghc-devs
Subject: extending GHC plugins with Hooks

hi all,

Sorry for taking so long to get back with this. I'm proposing a somewhat 
general way for adding 'hooks' to the GHC API, where users can override parts 
of the default compiling pipeline.

Hooks are simply functions or actions that replace existing compiler 
functionality. This means that usually only one application can use a specific 
hook at a time.

The obvious data structure to store the hooks is DynFlags. Unfortunately 
defining hooks in DynFlags directly would give birth to the mother of all 
import cycles, and it would also break the split-dll scheme on Windows. So 
here's the idea:

- Define each hook in the module where it's exported
- For each hook make a 'phantom' data type and an instance for the Hook type 
family (a self-contained sketch follows after this list)
- Add a TypeRep based map in DynFlags [0]
- For each hooked function, check for existence of a hook in DynFlags, 
otherwise run the default. Example: 
https://github.com/ghcjs/ghcjs-build/blob/master/refs/patches/ghc-ghcjs.patch#L83
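
To make the shape of the scheme concrete, here is a minimal, self-contained
sketch of it as described in the list above. All names here (Hooks, Hook,
insertHook, lookupHook, DsForeignsHook) and the stand-in hook signature are
illustrative only, not GHC's actual API:

{-# LANGUAGE TypeFamilies, FlexibleContexts, DeriveDataTypeable #-}
module HooksSketch where

import Data.Dynamic (Dynamic, fromDynamic, toDyn)
import qualified Data.Map as M
import Data.Typeable (TypeRep, Typeable, typeOf)

-- One phantom tag per hook; the type family maps a tag to its hook's type.
data DsForeignsHook = DsForeignsHook deriving Typeable

type family Hook a
type instance Hook DsForeignsHook = String -> IO ()   -- stand-in signature

-- The TypeRep-based map that would live in DynFlags.
newtype Hooks = Hooks (M.Map TypeRep Dynamic)

emptyHooks :: Hooks
emptyHooks = Hooks M.empty

insertHook :: (Typeable tag, Typeable (Hook tag))
           => tag -> Hook tag -> Hooks -> Hooks
insertHook tag h (Hooks m) = Hooks (M.insert (typeOf tag) (toDyn h) m)

lookupHook :: (Typeable tag, Typeable (Hook tag))
           => tag -> Hooks -> Maybe (Hook tag)
lookupHook tag (Hooks m) = M.lookup (typeOf tag) m >>= fromDynamic

-- A hooked function checks the map for an override, otherwise runs its default.
runDsForeigns :: Hooks -> String -> IO ()
runDsForeigns hooks decl =
  case lookupHook DsForeignsHook hooks of
    Just h  -> h decl
    Nothing -> putStrLn ("default desugaring of " ++ decl)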

Now this approach does have some disadvantages:
- No clear integration with existing plugins (I've tried adding an onLoadPlugin 
field to Plugin, where the Plugin could update DynFlags when it's loaded, but 
it was a bit messy, and plugins would not be loaded in time for some hooks, 
particularly what Edsko needs)
- More of GHC depends on type families
- Decentralized hook definitions feel a bit messy

So I'm open to suggestions for improvements (or replacements) of this scheme. I 
have some time the coming weeks to clean up or change the patch.

We've been testing some hooks with GHCJS for a while, and so far they seem to 
provide what we need (but I'm going to double-check in the coming weeks that we 
aren't missing any functionality):

- Customizations for linking JavaScript code with our own library locations [1]
- Hooking into the DriverPipeline so we can use the compilation manager [2]
- Desugaring customizations to remove some C-isms from the FFI code [3]
- Typechecking foreign import javascript imports [4]
- Override the built-in GHC.Prim so we can customize primop types [5]

I think it's easy to add those for Edsko and Thomas as well.

luite

[0] 
https://github.com/ghcjs/ghcjs-build/blob/master/refs/patches/ghc-ghcjs.patch#L239
[1] https://github.com/ghcjs/ghcjs/blob/master/src/Compiler/GhcjsHooks.hs#L44
[2] https://github.com/ghcjs/ghcjs/blob/master/src/Compiler/GhcjsHooks.hs#L192
https://github.com/ghcjs/ghcjs-build/blob/master/refs/patches/ghc-ghcjs.patch#L335
[3] https://github.com/ghcjs/ghcjs/blob/master/src/Gen2/Foreign.hs#L67
https://github.com/ghcjs/ghcjs-build/blob/master/refs/patches/ghc-ghcjs.patch#L83
[4] https://github.com/ghcjs/ghcjs/blob/master/src/Gen2/Foreign.hs#L68

Bit-rotting(?) HUGS-specific code in GHC boot libraries

2013-09-11 Thread Herbert Valerio Riedel
Hello GHC devs,

...as the topic came up in #ghc, what's the current rationale for keeping
HUGS-specific code sprinkled throughout GHC boot libraries?

A quick tally in GHC's source tree via

  find -type f -iname '*.*hs' | xargs grep '#if.*HUGS' | cut -f1-3 -d/ | uniq -c

results in

  1 ./libraries/directory
  5 ./libraries/haskell98
 84 ./libraries/base
  5 ./libraries/haskell2010
 29 ./libraries/array
 12 ./libraries/process
  1 ./libraries/bytestring

Does anyone actually still use/test those packages in HUGS? Is there any
real benefit to keeping the HUGS-specific code as dead (compile-time) code
in those packages? (When) can that code be removed?
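
For concreteness, the matches counted above are compile-time guards of roughly
this shape; the following is an illustrative sketch, not an actual excerpt from
base:

{-# LANGUAGE CPP #-}
-- Made-up example of the '#if ... HUGS' pattern the grep above counts:
-- only Hugs ever takes the first branch, so for GHC it is dead code.
module HugsGuardExample (implName) where

#if defined(__HUGS__)
implName :: String
implName = "hugs"
#else
implName :: String
implName = "ghc"
#endif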

Cheers,
  hvr


Re: Bit-rotting(?) HUGS-specific code in GHC boot libraries

2013-09-11 Thread Stephen Paul Weber

Somebody claiming to be Herbert Valerio Riedel wrote:

Does anyone actually still use/test those packages in HUGS?


I know lots of people still using HUGS.


(When) can that code be removed?


When enough features become standard that it becomes unnecessary ;)

--
Stephen Paul Weber, @singpolyma
See http://singpolyma.net for how I prefer to be contacted
edition right joseph




Re: GHC 7.8 release status

2013-09-11 Thread Edsko de Vries
Hi all,

So I managed to remove 3 out of 4 of the -boot files. The one that
remains, ironically, is the DsMonad.hs-boot. DsMonad has a
(transitive) dependency on Hooks in at least two ways: once through
Finder, which imports Packages, which imports Hooks; but that's easily
solved, because Finder can import PackageState instead. However, it is
less obvious to me how to resolve the following import cycle

- DsMonad imports tcIfaceGlobal from TcIface
- TcIface imports (loadWiredInHomeIface, loadInterface, loadDecls,
findAndReadIface) from LoadIface
- LoadIface imports Hooks

(There might be still others, this is the most direct one at the moment.)

(Just to be clear, Hooks imports DsMonad because it needs the DsM type
for the dsForeignsHook.)

I'm sure this cycle can be broken somehow, but I'm not familiar enough
with this part of the compiler to see if there is a natural point to
do it. As things stand, we have a DsMonad.hs-boot which just exports
the DsGblEnv, DsLclEnv, and DsM types. I don't know whether this is
something we should be worrying about or not.
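
As background for readers unfamiliar with the mechanism, here is a minimal,
self-contained sketch of how an hs-boot file breaks such a cycle. The module
and field names below are made up and only echo the real ones; this is not
GHC's actual code:

-- DsMonadMini.hs-boot: abstract forward declarations. Modules that import
-- this with {-# SOURCE #-} see only these, which is what breaks the cycle.
module DsMonadMini where
data DsGblEnv
data DsLclEnv

-- HooksMini.hs: imports the boot interface, so it does not (circularly)
-- depend on the real DsMonadMini, which itself imports HooksMini.
module HooksMini where
import {-# SOURCE #-} DsMonadMini (DsGblEnv)
newtype Hooks = Hooks { dsHook :: Maybe (DsGblEnv -> IO ()) }

-- DsMonadMini.hs: the real module; its declarations must be consistent with
-- the hs-boot file above.
module DsMonadMini where
import HooksMini (Hooks)
data DsGblEnv = DsGblEnv { envHooks :: Hooks }
data DsLclEnv = DsLclEnv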

Just to summarize: the hooks patch as things stand now introduces the
Hooks enumeration, rather than a separate type per hook, so that we
have a central and type-checked list of all hooks; in order to do
that, it moves some things around (some types move to HscTypes),
introduces a new module called PipelineMonad as per SPJ's suggestion,
and introduces a single additional boot file for the DsMonad module.

Edsko

On Tue, Sep 10, 2013 at 12:40 PM, Simon Peyton-Jones
simo...@microsoft.com wrote:
 I do like the single record.



 I would really really like a strong clear Note [blah] on the hooks::Dynamic
 field of DynFlags. It’s *so* non-obvious why it’s dynamic, and the reason is
 a really bad one, namely the windows DLL split nonsense.  (Not our fault but
 still needs very clear signposting.)



 I don’t understand why we need 4 new hs-boot files.  Eg why DsMonad.hs-boot?
 It should be safely below Hooks.



 Linker.hs-boot is solely because of LibrarySpec.  It would be possible to
 push that into HscTypes.  (Again with a comment to explain why.)



 DriverPipeline is already 2,100 lines long, and could reasonably be split
 with CompPipeline in the PipelineMonad module, say.





 In other words, a bit of refactoring might eliminate the loops *and*
 sometimes arguably improve the code.





 I don’t feel terribly strongly about all this.  It does feel a bit ad hoc…
 in a variety of places (eg deep in Linker.hs) there are calls to hooks, and
 it’s not clear to me why exactly those are the right places. But I suppose
 they are simply driven by what has been needed.



 Anyway if you two are happy (no one else seems to mind either way) then go
 ahead.



 Simon





 From: Luite Stegeman [mailto:stege...@gmail.com]
 Sent: 10 September 2013 08:37
 To: Edsko de Vries
 Cc: Simon Peyton-Jones; ghc-devs; Edsko de Vries


 Subject: Re: GHC 7.8 release status



 Edsko has done some work of rearranging imports in DynFlags to make the DLL
 split work, and I've implemented the hooks on top of this, in a record, as
 discussed:



 -
 https://github.com/ghcjs/ghcjs-build/blob/master/refs/patches/ghc-hooks-record.patch
 (not final yet, but should be usable for testing)

 - demo program: https://gist.github.com/luite/6506064



 Some disadvantages:

 - as long as the DLL split exists, more restructuring will be required if a
 new hook is added to something in a module on which DynFlags depends

 - 4 new hs-boot files required, new hooks will often require additional
 hs-boot files (when module A has a hook (so A imports Hooks, this can't be a
 source import), the hook will often have some types defined by A, so Hooks
 will have to import A)



 Advantages (over type families / Dynamic hooks):

 - Hooks neatly defined together in a single record



 I'm not so sure myself, but if everyone agrees that this is better than the
 older hooks I'll convert GHCJS to the new implementation later today and
 finalize the patch (comments are a bit out of date, and I'm not 100% sure
 yet that GHCJS doesn't need another hook for TH support in certain setups)
 and update the wiki.



 luite



 On Mon, Sep 9, 2013 at 4:55 PM, Edsko de Vries edskodevr...@gmail.com
 wrote:

 Simon,

 I talked to Luite this morning and I think we can come up with a
 design that includes the enumeration we prefer, with a single use of
 Dynamic in DynFlags -- it involves splitting off a PackageState module
 from Packages so that DynFlags doesn't depend on the entirely of
 Packages anymore (which would then, transitively, mean that it depends
 on Hooks and hence on a large part of ghc), but I think that should be
 doable. I'm working on that now.

 Edsko


 On Mon, Sep 9, 2013 at 3:51 PM, Simon Peyton-Jones
 simo...@microsoft.com wrote:
 Edsko



 I’m very short of time right now. I think you understand the issues here.
 Can you do a round or two with Luite and emerge with a design that you
 both
 think is best?

RE: GHC 7.8 release status

2013-09-11 Thread Simon Peyton-Jones
I'm ok with that, thanks.

Can you put your comments below into DsMonad.hs-boot so that we don't lose the 
reasoning?  It's devilish hard to work out *why* a hs-boot file must exist, 
sometimes.

Maybe also update
http://ghc.haskell.org/trac/ghc/wiki/Commentary/ModuleStructure
which tries to document some of these loops too.

Simon



| -Original Message-
| From: Edsko de Vries [mailto:edskodevr...@gmail.com]
| Sent: 11 September 2013 15:33
| To: Simon Peyton-Jones
| Cc: Luite Stegeman; ghc-devs; Edsko de Vries
| Subject: Re: GHC 7.8 release status
| 
| Hi all,
| 
| So I managed to remove 3 out of 4 of the -boot files. The one that
| remains, ironically, is the DsMonad.hs-boot. DsMonad has a
| (transitive) dependency on Hooks in at least two ways: once through
| Finder, which imports Packages, which imports Hooks; but that's easily
| solved, because Finder can import PackageState instead. However, it is
| less obvious to me how to resolve the following import cycle
| 
| - DsMonad imports tcIfaceGlobal from TcIface
| - TcIface imports (loadWiredInHomeIface, loadInterface, loadDecls,
| findAndReadIface) from LoadIface
| - LoadIface imports Hooks
| 
| (There might be still others, this is the most direct one at the
| moment.)
| 
| (Just to be clear, Hooks imports DsMonad because it needs the DsM type
| for the dsForeignsHook.)
| 
| I'm sure this cycle can be broken somehow, but I'm not familiar enough
| with this part of the compiler to see if there is a natural point to
| do it. As things stand, we have a DsMonad.hs-boot which just exports
| the DsGblEnv,  DsLclEnv, and DsM types. I don't know if this is
| something we should be worrying about or not?
| 
| Just to summarize: the hooks patch as things stand now introduces the
| Hooks enumeration, rather than a separate type per hook so that we
| have a central and type checked list of all hooks; in order to do
| that, it moves some things around (some types move to HscTypes),
| introduces a new module called PipelineMonad as per SPJ's suggestion,
| and introduces a single additional boot file for the DsMonad module.
| 
| Edsko
| 
| On Tue, Sep 10, 2013 at 12:40 PM, Simon Peyton-Jones
| simo...@microsoft.com wrote:
|  I do like the single record.
| 
| 
| 
|  I would really really like a strong clear Note [blah] on the
| hooks::Dynamic
|  field of DynFlags. It's *so* non-obvious why it's dynamic, and the
| reason is
|  a really bad one, namely the windows DLL split nonsense.  (Not our
| fault but
|  still needs very clear signposting.)
| 
| 
| 
|  I don't understand why we need 4 new hs-boot files.  Eg why
| DsMonad.hs-boot?
|  It should be safely below Hooks.
| 
| 
| 
|  Linker.hs-boot is solely because of LibrarySpec.  It would be possible
| to
|  push that into HscTypes.  (Again with a comment to explain why.)
| 
| 
| 
|  DriverPipeline is already 2,100 lines long, and could reasonably be
| split
|  with CompPipeline in the PipelineMonad module, say.
| 
| 
| 
| 
| 
|  In other words, a bit of refactoring might eliminate the loops *and*
|  sometimes arguably improve the code.
| 
| 
| 
| 
| 
|  I don't feel terribly strongly about all this.  It does feel a bit ad
| hoc...
|  in a variety of places (eg deep in Linker.hs) there are calls to
| hooks, and
|  it's not clear to me why exactly those are the right places. But I
| suppose
|  they are simply driven by what has been needed.
| 
| 
| 
|  Anyway if you two are happy (no one else seems to mind either way)
| then go
|  ahead.
| 
| 
| 
|  Simon
| 
| 
| 
| 
| 
|  From: Luite Stegeman [mailto:stege...@gmail.com]
|  Sent: 10 September 2013 08:37
|  To: Edsko de Vries
|  Cc: Simon Peyton-Jones; ghc-devs; Edsko de Vries
| 
| 
|  Subject: Re: GHC 7.8 release status
| 
| 
| 
|  Edsko has done some work of rearranging imports in DynFlags to make
| the DLL
|  split work, and I've implemented the hooks on top of this, in a
| record, as
|  discussed:
| 
| 
| 
|  -
|  https://github.com/ghcjs/ghcjs-build/blob/master/refs/patches/ghc-
| hooks-record.patch
|  (not final yet, but should be usable for testing)
| 
|  - demo program: https://gist.github.com/luite/6506064
| 
| 
| 
|  Some disadvantages:
| 
|  - as long as the DLL split exists, more restructuring will be required
| if a
|  new hook is added to something in a module on which DynFlags depends
| 
|  - 4 new hs-boot files required, new hooks will often require
| additional
|  hs-boot files (when module A has a hook (so A imports Hooks, this
| can't be a
|  source import), the hook will often have some types defined by A, so
| Hooks
|  will have to import A)
| 
| 
| 
|  Advantages (over type families / Dynamic hooks):
| 
|  - Hooks neatly defined together in a single record
| 
| 
| 
|  I'm not so sure myself, but if everyone agrees that this is better
| than the
|  older hooks I'll convert GHCJS to the new implementation later today
| and
|  finalize the patch (comments are a bit out of date, and I'm not 100%
| sure
|  yet that GHCJS doesn't need another hook 

Re: Suggestion for resolving the Cabal/GHC dependency problems

2013-09-11 Thread Duncan Coutts
On Wed, 2013-09-11 at 17:28 +0100, Duncan Coutts wrote:

 Further, if only ghc-pkg and the ghc build system depend on Cabal, then
 it is easier for Cabal to add more dependencies, since they do not have
 to be installed with ghc (due to the ghc lib depending on them). In
 particular the Cabal folks would like to use a proper parser and have
 suggested adding dependencies on parsec, mtl and transformers. If only
 ghc-pkg depends on Cabal, then these dependencies only need to be used
 at build time, and do not have to be installed (which also means they
 don't have to be kept quite so up to date).

Actually, this is not quite right. Since ghc would still ship Cabal (but
not depend on it), it would also ship its dependencies including parsec,
mtl and transformers. So they would need to be up to date and installed,
it's just that ghc itself would not depend on them.

If that's really inconvenient, it's plausible to have a minimal set
which is just the things ghc depends on, so long as what gets shipped to
users is the useful set, including Cabal.

Duncan



Re: Suggestion for resolving the Cabal/GHC dependency problems

2013-09-11 Thread Johan Tibell
On Wed, Sep 11, 2013 at 12:19 PM, Duncan Coutts 
duncan.cou...@googlemail.com wrote:

 Actually, this is not quite right. Since ghc would still ship Cabal (but
  not depend on it), it would also ship its dependencies including parsec,
 mtl and transformers. So they would need to be up to date and installed,
 it's just that ghc itself would not depend on them.

 If that's really inconvenient, it's plausible to have a minimal set
 which is just the things ghc depends on, so long as what gets shipped to
 users is the useful set, including Cabal.


I don't quite like how GHC's dependencies leak out to the rest of the
world. It makes it impossible for us to decide what version of those
libraries we want to ship in the platform. I guess we don't have a good
technical solution to this problem though.


Re: delete remote branch

2013-09-11 Thread Geoffrey Mainland
Hi Herbert,

On 09/08/2013 04:43 AM, Herbert Valerio Riedel wrote:
 Hello Nicolas,

 On 2013-09-08 at 09:41:04 +0200, Nicolas Frisby wrote:
 I just merged in my -fdicts-strict work, so I was deleting the old
branch…
 but it's rejected for some reason.

 $ git push origin --delete dicts-strict
 remote: performing tab-check...
 remote: + refs/heads/dicts-strict ghc my-username DENIED by fallthru
 remote: error: hook declined to update refs/heads/dicts-strict
 To ssh://g...@git.haskell.org/ghc.git
  ! [remote rejected] dicts-strict (hook declined)
 error: failed to push some refs to 'ssh://g...@git.haskell.org/ghc.git'

 Git gurus chime in? Thanks.
 The current configuration doesn't permit risky operations, such as
  deleting branches and/or non-fast-forward updates (imagine someone
  deleting or rebasing branches such as 'master' or 'ghc-7.6'). Moreover,
 having commits disappear causes headaches with other facilities
 (e.g. git submodules).

 Moreover, it was planned to define a Git ref namespace, where those
 operations would be allowed to everybody, something like 'wip/*' (see
 [1] for an example). Those branches could then also be made to be
 ignored by the Git email notifier, so that rebasing commits doesn't spam
 the Git commits mailing list.

 In the long-term, we should avoid cluttering the top-level branch
 namespace[2] with topic branches, and move to a more structured naming
 scheme, which leaves the top-level namespace to release branches.

 Long story short, I've deleted the 'dicts-strict' branch for you

 Cheers,
   hvr

  [1]: https://git.gnome.org/browse/glib/
  [2]: http://git.haskell.org/?p=ghc.git;a=heads

Maybe you can help me out with my workflow in light of the changes
brought about by the gitolite migration.

For the simd and new-th branches, I periodically rebase my work and push
to the main repo so others can see/review my work. These are necessarily
non-fast-forward pushes. Then, once the branches are rebased and ready
to merge, I plan to perform an empty (comment only) merge commit so that
the merges are obvious in the git history.

But it looks like I can no longer push my work to the repo if I want to
rebase. Not rebasing my branches is a terrible choice. The other options
are to either never push my branches, or to push my branches somewhere
other than to the main repo, e.g., github. Those are both also
undesirable.

Is there any chance we could get the wip namespace up and running soon?

Thanks,
Geoff



Re: Suggestion for resolving the Cabal/GHC dependency problems

2013-09-11 Thread Carter Schonwald
wasn't there an effort to have a mini private variant of attoparsec for the
parser combinator deps?


On Wed, Sep 11, 2013 at 4:03 PM, Johan Tibell johan.tib...@gmail.com wrote:

 On Wed, Sep 11, 2013 at 12:19 PM, Duncan Coutts 
 duncan.cou...@googlemail.com wrote:

 Actually, this is not quite right. Since ghc would still ship Cabal (but
  not depend on it), it would also ship its dependencies including parsec,
 mtl and transformers. So they would need to be up to date and installed,
 it's just that ghc itself would not depend on them.

 If that's really inconvenient, it's plausible to have a minimal set
 which is just the things ghc depends on, so long as what gets shipped to
 users is the useful set, including Cabal.


 I don't quite like how GHC's dependencies leak out to the rest of the
 world. It makes it possible for us to decide what version we want to ship
 in the platform of those libraries. I guess we don't have a good technical
 solution to this problem though.






Re: llvm calling convention matters

2013-09-11 Thread Carter Schonwald
hey all,

first let me preface by saying I am in favor of breaking and
updating/modernizing the GHC ABI.

I just think that, for a number of reasons, it doesn't make sense to do it
for the 7.8 release, but rather to start work on it in another month or so, so
we can systematically arrive at a better ABI and keep all the code gens
first-class citizens. (We also need to work out the type system changes needed
to correctly use SIMD shuffles, which are currently inexpressible with GHC's
type system. SIMD shuffles are crucial for interesting levels of SIMD
performance!)

the reason I don't want to make the ABI change right now is because then
we'd have to wait until after llvm 3.4 gets released in like 6 months
before giving them another breaking change!
 (OR start baking an LLVM into GHC, which is a leap we're not 100% sold on,
though there are clear good reasons why!).

  Basically, if we make breaking changes to the ABI now (and thus have a
split ABI for llvm 3.4/HEAD vs earlier), and then we do fixups or more
breakage for 7.10, then when 7.10 rolls around (perhaps late next spring or
sometime in the summer?), the only supported llvm version for 7.10 would be
LLVM HEAD / 3.5 (which won't be released till some time thereafter)! Unless
we go ahead and break the 3.4 ABI in 7.10 rather than the 7.8 ABI (whatever
that would entail).  This is assuming the roughly 7-8 months between major
version releases that LLVM has done of late.

additionally, as Johan remarked today on a pending patch of mine, having
operations only work on the llvm backend, and not on the native code gen, is
pretty problematic!  see http://ghc.haskell.org/trac/ghc/ticket/8256

tl;dr : Unless we're throwing away the native code gen backend next month, we
probably want to avoid increasing their capability gap / current ABI
incompatibility right before the 7.8 release. I am willing to help explore
modernizing the native code gens so that they have parity with the llvm
backends. Additionally, boxing ourselves into a corner where for 7.10 the
only llvm with the right ABI will be llvm 3.5 seems totally unacceptable
from an end user's / distribution package manager's standpoint, and a huge
support headache for the community.

I've had to help deal with the support headache of the xcode5 clang + ghc
issues on OS X,  A LOT,  in the past 2 months, I'm not keen on deliberately
creating similar support disasters for myself and others.

that said: I absolutely agree that we should fix up the ABI, have a clear
story for XMM, YMM, and ZMM registers, and if you've been following trac
tickets at all, you'll see there's even a type system issue in properly
handling the SIMD shuffles! I briefly sketch out the issue in
http://ghc.haskell.org/trac/ghc/ticket/8107 (last comment)

that said: I'm open to being convinced I'm wrong, and I absolutely
understand your motivations for wanting it now, but I really believe that
doing so right now will create a number of problems that are better off
evaded to begin with

cheers
-Carter



On Wed, Sep 11, 2013 at 5:49 PM, Geoffrey Mainland
mainl...@cs.drexel.edu wrote:

 Hi Carter,

 On 09/06/2013 03:24 PM, Carter Tazio Schonwald wrote:
  Hey Geoff,

  I'm leary about doing a calling convention change right before the ghc
  release (and Im happy to elaborate more on the phone some time) 1)
  I'd rather we test the patches on llvm locally ourselves before going
  upstream 2) doing that AVX change on the calling convention now, would
  make it harder to make a more systematic exploration of calling
  convention changes post 7.8 release, because we would face either
  breaking the llvm head/3.4 changes, or having to wait till the next
  llvm release cycle (3.5?!) to upstream any more systematic
  changes. (such as adding substantially more SIMD registers to the GHC
  calling convention!)
 
  I understand your likely motivation for wanting the calling convention
  landing in the 7.8 release, namely it may eke an easy 2x perf boost in
  your stream fusion libs, i just worry that the change would ultimately
  cut off our ability to do more aggressive experimentation and
  improvements (eg more simd registers!) for ghc 7.10 over the next
  year?
 
  on an unrelated note: I will spend some time this weekend given you
  the various simd operations I want / think are valuable. the low
  hanging fruit would be figuring out a good haskell type / analogue of
  the llvm __builtin_shuffle(a,b,c) primop, because that usually should
  generate decent code. I'll work out the details of this and some other
  examples and send it your way in the next few days
 
  -Carter

 Currently, on x86-64 we pass floats, doubles, and 128-bit wide SIMD
 vectors in xmm1-xmm6. I propose that we change the calling conventions
 to pass 256-bit wide SIMD vectors in ymm1-ymm6 and 512-bit wide SIMD
 vectors in zmm1-zmm6. I don't know why GHC doesn't use xmm0 or xmm7, as
 the Linux C calling convention uses xmm0-xmm7. Simon, perhaps you know
 why? I 

Re: llvm calling convention matters

2013-09-11 Thread Geoffrey Mainland
Can you provide an example of the kind of ABI change you might want for
7.10? Is it mainly using more registers to pass arguments? We're already
using 6 *mm* registers to pass arguments on x86_64. I don't know for
sure, but I would be very surprised if there is code out there that
would benefit greatly from passing more than 6 Float/Double/SIMD vector
arguments in registers.

Without understanding the ABI design space you have in mind, I can't
comment on how changing the ABI now would or would not make future
exploration more difficult.

I don't see why we should limit ourselves by insisting that the gap
between the LLVM back-end and the native back-end not grow further.
If we want SIMD, the gap is already quite large. Yes it would be nice to
have feature parity, but there are only so many man-hours available, and
we want to invest them wisely. The SIMD primops already do not work on
the native codegen; the user gets an error telling them to use the LLVM
back-end if they use the SIMD primops with the native codegen.

I was not suggesting that we require LLVM 3.4 or later for this or any
future version of GHC. Instead, the ABI would change based on the
version of LLVM used. I think that is unavoidable at this point and not
a huge deal as it would only affect SIMD code.

All this said, I'm not going to push. Changing the ABI just creates more
work for me. I'm very motivated to get the rest of the SIMD patches into
HEAD before I present our SIMD paper at ICFP in a few weeks. However, a
year from now my priorities will likely be very different, so the ball
will be entirely in your (or someone else's, just not my!) court.

Geoff

On 09/11/2013 06:26 PM, Carter Schonwald wrote:
 hey all, 

 first let me preface by saying I am in favor of breaking and
 updating/modernizing the GHC ABI. 

 I just think that for a number of reasons, it doesn't make sense to do
 it for the 7.8 release, but rather start work on it in another month
 or so, so we can systematically have a better set of ABI, and keep all
 the code gens first-class citizens. (also work out the type system
 changes needed to correctly use SIMD shuffles, which are currently
 inexpressible with GHC's type system. SIMD shuffles are crucial for
 interesting levels of SIMD performance!)

 the reason I don't want to make the ABI change right now is because
 then we'd have to wait until after llvm 3.4 gets released in like 6
 months before giving them another breaking change!
  (OR start baking an LLVM into GHC, which is a leap we're not 100% sold on,
 though there are clear good reasons why!).

   Basically, if we make breaking changes to the ABI now (and thus have
 split ABI for llvm 3.4HEAD vs earlier), and then we do fixups or more
 breakage for 7.10, then when 7.10 rolls around (perhaps late next
 spring or sometime in the summer, perhaps?), the only supported llvm
 version for 7.10 would be LLVM HEAD / 3.5 (which won't be released
 till some time thereafter)! Unless we go ahead and break the 3.4 ABI in
 7.10 rather than the 7.8 ABI (whatever that would entail).
  This is assuming the ~ 7-8 months between major version releases
 cycle that LLVM has done of late

 additionally, as Johan remarked today on a pending patch of mine,
 having operations only work on the llvm backend, and not on the native
 code gen is pretty problematical!  see
  http://ghc.haskell.org/trac/ghc/ticket/8256 

 tl;dr : Unless we're throwing away native code gen backend next month,
 we probably want to actually not increase their capability gap /
 current ABI incompatibility right before 7.8 release. I am willing to
 help explore modernizing the native code gens so that they have parity
 with the llvm backends. Additionally, boxing ourselves in a corner
 where for 7.10 the only llvm with the right ABI will be llvm 3.5 seems
 totally unacceptable from an end users / distribution package managers
 standpoint, and a huge support headache for the  community. 

 I've had to help deal with the support headache of the xcode5 clang +
 ghc issues on OS X,  A LOT,  in the past 2 months, I'm not keen on
 deliberately creating similar support disasters for myself and others. 

 that said: I absolutely agree that we should fix up the ABI, have a
 clear story for XMM, YMM, and ZMM registers, and if you've been
 following trac tickets at all, you'll see theres even a type system
 issue in properly handling the SIMD shuffles! i briefly sketch out the
 issue in http://ghc.haskell.org/trac/ghc/ticket/8107 (last comment)

 that said: i'm open to being convinced i'm wrong, and I absolutely
 understand your motivations for wanting it now, but I really believe
 that doing so right now will create a number of problems that are
 better off evaded to begin with

 cheers
 -Carter



 On Wed, Sep 11, 2013 at 5:49 PM, Geoffrey Mainland
  mainl...@cs.drexel.edu wrote:

 Hi Carter,

 On 09/06/2013 03:24 PM, Carter Tazio Schonwald wrote:
  

Re: llvm calling convention matters

2013-09-11 Thread Johan Tibell
On Wed, Sep 11, 2013 at 3:59 PM, Geoffrey Mainland mainl...@apeiron.net wrote:

 I don't see why we should limit ourselves by insisting that the gap
 between what the LLVM back-end and the native back-end not grow further.
 If we want SIMD, the gap is already quite large. Yes it would be nice to
 have feature parity, but there are only so many man-hours available, and
 we want to invest them wisely. The SIMD primops already do not work on
 the native codegen; the user gets an error telling them to use the LLVM
 back-end if they use the SIMD primops with the native codegen.


Having conditional primops makes for lots of ugly #ifdefs everywhere, and
everyone needs to make sure they do these correctly. We don't have to
implement SIMD in the native backend; we just need to have some reasonable
emulation, e.g. see how MO_PopCnt has a C fallback or how Int64 falls back
to C code.


Re: llvm calling convention matters

2013-09-11 Thread Geoffrey Mainland
On 09/11/2013 07:33 PM, Johan Tibell wrote:
 On Wed, Sep 11, 2013 at 3:59 PM, Geoffrey Mainland mainl...@apeiron.net 
 wrote:

 I don't see why we should limit ourselves by insisting that the gap
 between what the LLVM back-end and the native back-end not grow
further.
 If we want SIMD, the gap is already quite large. Yes it would be
nice to
 have feature parity, but there are only so many man-hours
available, and
 we want to invest them wisely. The SIMD primops already do not work on
 the native codegen; the user gets an error telling them to use the
LLVM
 back-end if they use the SIMD primops with the native codegen.


 Having conditional primops makes for lots of ugly #ifdefs everywhere
 and everyone need to make sure they do these correctly. We don't have
 to implement SIMD in the native backend, we just need to have some
 reasonable emulation e.g. see how MO_PopCnt has a C fallback or how
 Int64 falls back to C code.


Do you mean we need a reasonable emulation of the SIMD primops for the
native codegen?

Geoff



Re: llvm calling convention matters

2013-09-11 Thread Johan Tibell
On Wed, Sep 11, 2013 at 4:40 PM, Geoffrey Mainland mainl...@apeiron.net wrote:

 Do you mean we need a reasonable emulation of the SIMD primops for the
 native codegen?


Yes. Reasonable in the sense that it computes the right result. I can see
that some code might still want to #ifdef (if the fallback isn't fast
enough).


Re: llvm calling convention matters

2013-09-11 Thread Geoffrey Mainland
We support compiling some code with -fllvm and some not in the same
executable. Otherwise how could users of the Haskell Platform link their
-fllvm-compiled code with native-codegen-compiled libraries like base, etc.?

In other words, the LLVM and native back ends use the same calling
convention. With my SIMD work, they still use the same calling
conventions, but the native codegen can never generate code that uses
SIMD instructions.

Geoff

On 09/11/2013 10:03 PM, Johan Tibell wrote:
 OK. But that doesn't create a problem for the code we output with the
 LLVM backend, no? Or do we support compiling some code with -fllvm and
 some not in the same executable?


 On Wed, Sep 11, 2013 at 6:56 PM, Geoffrey Mainland
  mainl...@apeiron.net wrote:

 We definitely have interop between the native codegen and the LLVM
 back
 end now. Otherwise anyone who wanted to use the LLVM back end
 would have
 to build GHC themselves. Interop means that users can install the
 Haskell Platform and still use -fllvm when it makes a performance
 difference.

 Geoff

 On 09/11/2013 07:59 PM, Johan Tibell wrote:
  Do nothing different than you're doing for 7.8, we can sort it out
  later. Just put a comment on the primops saying they're
 LLVM-only. See
  e.g.
 
 
 
 
 https://github.com/ghc/ghc/blob/master/compiler/prelude/primops.txt.pp#L181
 
  for an example how to add docs to primops.
 
  I don't think we need interop between the native and the LLVM
  backends. We don't have that now do we (i.e. they use different
  calling conventions).
 
 
 
  On Wed, Sep 11, 2013 at 4:51 PM, Geoffrey Mainland
   mainl...@apeiron.net wrote:
 
  On 09/11/2013 07:44 PM, Johan Tibell wrote:
   On Wed, Sep 11, 2013 at 4:40 PM, Geoffrey Mainland
   mainl...@apeiron.net wrote:
Do you mean we need a reasonable emulation of the SIMD
 primops for
the native codegen?
  
   Yes. Reasonable in the sense that it computes the right
 result.
  I can
   see that some code might still want to #ifdef (if the
 fallback isn't
   fast enough).
 
  Two implications of this requirement:
 
  1) There will not be SIMD in 7.8. I just don't have the
 time. In fact,
  what SIMD support is there already will have to be removed if we
  cannot
  live with LLVM-only SIMD primops.
 
  2) If we also require interop between the LLVM back-end and
 the native
  codegen, then we cannot pass any SIMD vectors in
 registers---they all
  must be passed on the stack.
 
  My plan, as discussed with Simon PJ, is to not support SIMD
 primops at
  all with the native codegen. If there is a strong feeling that
  this *is
  not* the way to go, the I need to know ASAP.
 
  Geoff
 
 
 





running out of bits in lexer

2013-09-11 Thread Richard Eisenberg
Hi devs,

I'm in the process of reimplementing role annotations (#8185). I need to add a 
new pseudo-keyword 'role' to the lexer, and I'm initially tempted to guard the 
lexing of this keyword with an extension bit (like, say, tyFamBit in Lexer.x). 
But bits 0 to 31 are already taken, and the type of the bitmap is Int, so I'm
out of luck.
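
As a sketch of the pattern in question (the names and bit numbers are
illustrative, not Lexer.x's actual definitions), the bitmap idea looks roughly
like this, and a 33rd flag simply has no bit left in a 32-bit Int:

module ExtsBitmapSketch where

import Data.Bits (bit, testBit, (.|.))

type ExtsBitmap = Int

-- Hypothetical bit index for an existing extension flag.
tyFamBit :: Int
tyFamBit = 8

-- Test whether a given extension bit is set in the bitmap.
xtest :: Int -> ExtsBitmap -> Bool
xtest b bitmap = testBit bitmap b

-- Build a bitmap from a list of enabled bit indices.
mkBitmap :: [Int] -> ExtsBitmap
mkBitmap = foldr (\b acc -> bit b .|. acc) 0

-- With bits 0..31 already assigned, a new flag would need bit 32, which a
-- 32-bit Int does not have.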

However, I think there's a way out: the parser should do the right thing when 
it sees these pseudo-keywords, and hopefully the downstream code could check 
the enabled options. So, it would seem that having some of these bit controls 
in the lexer is unnecessary -- more a belt-and-suspenders thing than anything 
else.

So, I propose to let the lexer treat 'role' specially in all cases, relying on 
the parser to treat it like an ordinary varid except when 'role' comes right 
after 'type'.
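
To illustrate the proposal with a toy sketch (this is not GHC's lexer or
parser; the token and function names are made up): 'role' is always lexed as an
ordinary identifier, and the parser gives it keyword meaning only when it
directly follows 'type':

module RoleKeywordSketch where

data Token = TType | TIdent String
  deriving Show

-- The lexer never treats 'role' specially; it is just an identifier token.
lexWord :: String -> Token
lexWord "type" = TType
lexWord w      = TIdent w

-- The parser recognises the pseudo-keyword purely by position.
startsRoleAnnotation :: [Token] -> Bool
startsRoleAnnotation (TType : TIdent "role" : _) = True
startsRoleAnnotation _                           = False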

Is there something I'm missing here? In particular, I can't think of a good 
reason not to treat the pseudo-keyword 'family' in the same way and allow it to 
lex regardless of the extensions. This might also improve error messages.

Thanks,
Richard