Re: How does GHC's testsuite work?

2017-10-30 Thread Edward Z. Yang
Excerpts from Sébastien Hinderer's message of 2017-10-30 16:39:24 +0100:
> Dear Edward,
> 
> Many thanks for your prompt response!
> 
> Edward Z. Yang (2017/10/30 11:25 -0400):
> > Actually, it's the reverse of what you said: like OCaml, GHC essentially
> > has ~no unit tests; it's entirely Haskell programs which we compile
> > (and sometimes run; a lot of tests are for the typechecker only so
> > we don't bother running those.)  The .T file is just a way of letting
> > the Python driver know what tests exist.
> 
> Oh okay! Would you be able to point me to just a few tests to get an
> idea of a few typical situations, please?

For example:

The metadata

https://github.com/ghc/ghc/blob/master/testsuite/tests/typecheck/should_fail/all.T

The source file

https://github.com/ghc/ghc/blob/master/testsuite/tests/typecheck/should_fail/tcfail011.hs

The expected error output

https://github.com/ghc/ghc/blob/master/testsuite/tests/typecheck/should_fail/tcfail011.stderr
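Concretely, the metadata is a one-line entry per test in the all.T file. A sketch of what the tcfail011 entry looks like (the exact modifiers in the current tree may differ):

```python
# Register tcfail011 with the Python driver: a test that must FAIL to
# compile.  The driver compiles tcfail011.hs and diffs the compiler's
# stderr against the expected output in tcfail011.stderr.
test('tcfail011', normal, compile_fail, [''])
```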

> One other question I forgot to ask: how do you deal with conditional
> tests? For instance, if a test should be run only on some platforms? Or,
> in OCaml we have tests for Fortran bindings that should be run only if a
> Fortran compiler is available. How would you deal with such tests?

All managed inside the Python driver code.

Example:
https://github.com/ghc/ghc/blob/master/testsuite/tests/rts/all.T#L32
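In that scheme a platform condition is just a Python predicate attached to the test's metadata entry, evaluated by the driver before anything is compiled. A hypothetical sketch (the test name is made up; `when`, `opsys`, and `skip` follow the driver's conventions, and an OCaml-style "needs a Fortran compiler" check would be a custom predicate of the same shape):

```python
# Skip this hypothetical test on Windows; run it everywhere else.
# The predicate is evaluated at collection time, per platform.
test('someRtsTest', when(opsys('mingw32'), skip), compile_and_run, [''])
```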

Edward
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/glasgow-haskell-users


Re: How does GHC's testsuite work?

2017-10-30 Thread Edward Z. Yang
Actually, it's the reverse of what you said: like OCaml, GHC essentially
has ~no unit tests; it's entirely Haskell programs which we compile
(and sometimes run; a lot of tests are for the typechecker only so
we don't bother running those.)  The .T file is just a way of letting
the Python driver know what tests exist.

Edward

Excerpts from Sébastien Hinderer's message of 2017-10-30 16:17:38 +0100:
> Dear all,
> 
> I am a member of OCaml's development team. More specifically, I am
> working on a test-driver for the OCaml compiler, which will be part of
> OCaml's 4.06 release.
> 
> I am currently writing an article to describe the tool and its
> principles. In this article, I would like to also talk about how other
> compilers' testsuites are handled, and looking at how things are done in GHC
> is natural.
> 
> In OCaml, our testsuite essentially consists of whole programs that
> we compile and run, checking that the compilation and execution results
> match the expected ones.
> 
> From what I could see from GHC's testsuite, it seemed to me that it uses
> Python to drive the tests. I also understood that the testsuite has
> tests that are more like unit tests, in the .T file. Am I correct
> here? Or do you guys also have whole program tests?
> If you do, how do you compile and run them?
> 
> Any comment / hint on this aspect of the test harness' design would be
> really helpful.
> 
> Many thanks in advance,
> 
> Sébastien.
> 


Re: How to get a heap visualization

2017-08-30 Thread Edward Z. Yang
Why not the plain old heap profiler?

Edward

Excerpts from Yitzchak Gale's message of 2017-08-30 18:34:05 +0300:
> I need a simple heap visualization for debugging purposes.
> I'm using GHC 8.0.2 to compile a large and complex yesod-based
> web app. What's the quickest and easiest way?
> 
> Vacuum looks simple and nice. But it has some long-outstanding
> PRs against it to support GHC 7.10 and GHC 8.0 that were never
> applied.
> 
> https://github.com/thoughtpolice/vacuum/issues/9
> 
> Getting ghc-vis to compile looks hopeless, for a number of reasons.
> The dependencies on gtk and cairo are huge. It hasn't been updated
> on Hackage for a year and a half. It requires base < 4.9. I need to run
> the visualizer either on a headless Ubuntu 16.04 server, or locally on
> Windows. And anyway, the fancy GUI in ghc-vis is way overkill for me.
> 
> The heap scraper backend for ghc-vis, ghc-heap-view, looks usable,
> and better supported than vacuum. But is there a quick and simple
> visualizer for its output, without ghc-vis?
> 
> Is there anything else? Is the best option to fork vacuum and try
> to apply the PRs?


Re: Profiling plugins

2017-06-11 Thread Edward Z. Yang
Hello M,

Unfortunately, if you want detailed profiling, you will have to rebuild
GHC with profiling.  Note that you can get basic heap profile information
without rebuilding GHC.
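For instance (a command-line sketch, not from the original thread: it assumes that -hT, the closure-type heap profile, is available in GHC's ordinary non-profiled RTS, and the plugin module and source file names below are made up):

```shell
# Heap-profile the GHC invocation that loads the plugin, without a
# profiled GHC build; writes ghc.hp in the current directory.
ghc -fforce-recomp -fplugin=My.Plugin Foo.hs +RTS -hT -RTS
hp2ps -c ghc.hp    # render ghc.hp to PostScript (ghc.ps)
```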

Edward

Excerpts from M Farkas-Dyck's message of 2017-06-06 12:34:57 -0800:
> How is this done? I am working on ConCat
> [https://github.com/conal/concat] and we need a profile of the plugin
> itself. I tried "stack test --profile" but that does a profile of the
> test program, not the plugin. Can i do this and not rebuild GHC?


Re: Accessing the "original" names via GHC API

2017-01-25 Thread Edward Z. Yang
Hi Ranjit,

Unfortunately you need more information to do this, since the
set of modules available for import can vary depending on whether
packages are hidden (not even counting whether a module is exposed!)

The way GHC's pretty printer produces a good name is that it keeps
track of all of the names in scope, and where they came from,
in a GlobalRdrEnv.  The relevant code is 'mkPrintUnqualified'
in HscTypes; if you pretty-print in the user style with
an appropriately set up GlobalRdrEnv you should
get the names you want.
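As a rough sketch of that approach (GHC-API pseudocode using roughly GHC 8.0-era names; the exact signatures move around between releases, so treat this as illustrative only):

```haskell
-- Pseudocode sketch, not checked against any particular GHC release.
-- Given the GlobalRdrEnv of the module whose "point of view" we want,
-- build a PrintUnqualified and pretty-print with the user style:
printFromContext :: DynFlags -> GlobalRdrEnv -> SDoc -> String
printFromContext dflags rdr_env doc =
    showSDocForUser dflags (mkPrintUnqualified dflags rdr_env) doc
```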

Edward

Excerpts from Ranjit Jhala's message of 2017-01-24 19:00:05 -0800:
> Dear Joachim,
> 
> You are right -- some more context.
> 
> Given
> 
>   tc  :: TyCon
>   m   :: ModName
>   env :: HscEnv
> 
> I want to get a
> 
>   s :: String
> 
> such that _in_ the context given by `m` and `env` I can resolve `s` to get
> back the original `TyCon`, e.g. something like
> 
>   L _ rn <- hscParseIdentifier env s
>   name   <- lookupRdrName env modName rn
> 
> would then return `name :: Name` which corresponds to the original `TyCon`.
> 
> That is, the goal is _not_ pretty printing, but "serialization" into a
> String
> representation that lets me recover the original `TyCon` later.
> 
> (Consequently, `"Data.Set.Base.Set"` doesn't work as the `Data.Set.Base`
> module is hidden and hence, when I try the above, GHC complains that the
> name is not in scope.
> 
> Does that clarify the problem?
> 
> Thanks!
> 
> - Ranjit.
> 
> 
> On Tue, Jan 24, 2017 at 6:11 PM, Joachim Breitner 
> wrote:
> 
> > Hi Ranjit,
> >
> > Am Dienstag, den 24.01.2017, 16:09 -0800 schrieb Ranjit Jhala:
> > > My goal is to write a function
> > >
> > >tyconString :: TyCon -> String
> > >
> > > (perhaps with extra parameters) such that given the
> > > `TyCon` corresponding to `Set`, I get back the "original"
> > > name `S.Set`, or even `Data.Set.Set`.
> > >
> > > Everything I've tried, which is fiddling with different variants of
> > > `PprStyle`, end up giving me `Data.Set.Base.Set`
> > >
> > > Does anyone have a suggestion for how to proceed?
> >
> > in a way, `Data.Set.Base.Set` is the “original”, proper name for Set,
> > everything else is just a local view on the name.
> >
> > So, are you maybe looking for a way to get the “most natural way” to
> > print a name in a certain module context?
> >
> > This functionality must exist somewhere, as ghci is printing out errors
> > this way. But it certainly would require an additional argument to
> > tyconString, to specify in which module to print the name.
> >
> > Greetings,
> > Joachim
> >
> >
> > --
> > Joachim “nomeata” Breitner
> >   m...@joachim-breitner.de • https://www.joachim-breitner.de/
> >   XMPP: nome...@joachim-breitner.de • OpenPGP-Key: 0xF0FBF51F
> >   Debian Developer: nome...@debian.org


FINAL CALL FOR TALKS (Aug 8 deadline): Haskell Implementors Workshop 2016, Sep 24, Nara

2016-08-01 Thread Edward Z. Yang
Deadline is in a week!  Submit your talks!

Call for Contributions
   ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2016
Nara, Japan, 24 September, 2016

Co-located with ICFP 2016
   http://www.icfpconference.org/icfp2016/

Important dates
---
Proposal Deadline:  Monday, 8 August, 2016
Notification:   Monday, 22 August, 2016
Workshop:   Saturday, 24 September, 2016

The 8th Haskell Implementors' Workshop is to be held alongside ICFP
2016 this year in Nara. It is a forum for people involved in the
design and development of Haskell implementations, tools, libraries,
and supporting infrastructure, to share their work and discuss future
directions and collaborations with others.

Talks and/or demos are proposed by submitting an abstract, and
selected by a small program committee. There will be no published
proceedings; the workshop will be informal and interactive, with a
flexible timetable and plenty of room for ad-hoc discussion, demos,
and impromptu short talks.

Scope and target audience
-

It is important to distinguish the Haskell Implementors' Workshop from
the Haskell Symposium which is also co-located with ICFP 2016. The
Haskell Symposium is for the publication of Haskell-related research. In
contrast, the Haskell Implementors' Workshop will have no proceedings --
although we will aim to make talk videos, slides and presented data
available with the consent of the speakers.

In the Haskell Implementors' Workshop, we hope to study the underlying
technology. We want to bring together anyone interested in the
nitty-gritty details behind turning plain-text source code into a
deployed product. Having said that, members of the wider Haskell
community are more than welcome to attend the workshop -- we need your
feedback to keep the Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics
that people feel we've missed, so by all means submit a proposal even if
it doesn't fit exactly into one of these buckets:

  * Compilation techniques
  * Language features and extensions
  * Type system implementation
  * Concurrency and parallelism: language design and implementation
  * Performance, optimisation and benchmarking
  * Virtual machines and run-time systems
  * Libraries and tools for development or deployment

Talks
-

At this stage we would like to invite proposals from potential speakers
for talks and demonstrations. We are aiming for 20 minute talks with 10
minutes for questions and changeovers. We want to hear from people
writing compilers, tools, or libraries, people with cool ideas for
directions in which we should take the platform, proposals for new
features to be implemented, and half-baked crazy ideas. Please submit a
talk title and abstract of no more than 300 words.

Submissions should be made via HotCRP. The website is:
  https://icfp-hiw16.hotcrp.com/

We will also have a lightning talks session which will be organised on
the day. These talks will be 5-10 minutes, depending on available time.
Suggested topics for lightning talks are to present a single idea, a
work-in-progress project, a problem to intrigue and perplex Haskell
implementors, or simply to ask for feedback and collaborators.

Organisers
--- End forwarded message ---


Call for talks: Haskell Implementors Workshop 2016, Sep 24 (FIXED), Nara

2016-06-09 Thread Edward Z. Yang
(...and now with the right date in the subject line!)

Call for Contributions
   ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2016
Nara, Japan, 24 September, 2016

Co-located with ICFP 2016
   http://www.icfpconference.org/icfp2016/

Important dates
---
Proposal Deadline:  Monday, 8 August, 2016
Notification:   Monday, 22 August, 2016
Workshop:   Saturday, 24 September, 2016

The 8th Haskell Implementors' Workshop is to be held alongside ICFP
2016 this year in Nara. It is a forum for people involved in the
design and development of Haskell implementations, tools, libraries,
and supporting infrastructure, to share their work and discuss future
directions and collaborations with others.

Talks and/or demos are proposed by submitting an abstract, and
selected by a small program committee. There will be no published
proceedings; the workshop will be informal and interactive, with a
flexible timetable and plenty of room for ad-hoc discussion, demos,
and impromptu short talks.

Scope and target audience
-

It is important to distinguish the Haskell Implementors' Workshop from
the Haskell Symposium which is also co-located with ICFP 2016. The
Haskell Symposium is for the publication of Haskell-related research. In
contrast, the Haskell Implementors' Workshop will have no proceedings --
although we will aim to make talk videos, slides and presented data
available with the consent of the speakers.

In the Haskell Implementors' Workshop, we hope to study the underlying
technology. We want to bring together anyone interested in the
nitty-gritty details behind turning plain-text source code into a
deployed product. Having said that, members of the wider Haskell
community are more than welcome to attend the workshop -- we need your
feedback to keep the Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics
that people feel we've missed, so by all means submit a proposal even if
it doesn't fit exactly into one of these buckets:

  * Compilation techniques
  * Language features and extensions
  * Type system implementation
  * Concurrency and parallelism: language design and implementation
  * Performance, optimisation and benchmarking
  * Virtual machines and run-time systems
  * Libraries and tools for development or deployment

Talks
-

At this stage we would like to invite proposals from potential speakers
for talks and demonstrations. We are aiming for 20 minute talks with 10
minutes for questions and changeovers. We want to hear from people
writing compilers, tools, or libraries, people with cool ideas for
directions in which we should take the platform, proposals for new
features to be implemented, and half-baked crazy ideas. Please submit a
talk title and abstract of no more than 300 words.

Submissions should be made via HotCRP. The website is:
  https://icfp-hiw16.hotcrp.com/

We will also have a lightning talks session which will be organised on
the day. These talks will be 5-10 minutes, depending on available time.
Suggested topics for lightning talks are to present a single idea, a
work-in-progress project, a problem to intrigue and perplex Haskell
implementors, or simply to ask for feedback and collaborators.

Organisers
--

  * Joachim Breitner(Karlsruhe Institut für Technologie)
  * Duncan Coutts   (Well Typed)
  * Michael Snoyman (FP Complete)
  * Luite Stegeman  (ghcjs)
  * Niki Vazou  (UCSD)
  * Stephanie Weirich   (University of Pennsylvania) 
  * Edward Z. Yang - chair  (Stanford University)


Call for talks: Haskell Implementors Workshop 2016, Aug 24, Nara

2016-06-09 Thread Edward Z. Yang
Call for Contributions
   ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2016
Nara, Japan, 24 September, 2016

Co-located with ICFP 2016
   http://www.icfpconference.org/icfp2016/

Important dates
---
Proposal Deadline:  Monday, 8 August, 2016
Notification:   Monday, 22 August, 2016
Workshop:   Saturday, 24 September, 2016

The 8th Haskell Implementors' Workshop is to be held alongside ICFP
2016 this year in Nara. It is a forum for people involved in the
design and development of Haskell implementations, tools, libraries,
and supporting infrastructure, to share their work and discuss future
directions and collaborations with others.

Talks and/or demos are proposed by submitting an abstract, and
selected by a small program committee. There will be no published
proceedings; the workshop will be informal and interactive, with a
flexible timetable and plenty of room for ad-hoc discussion, demos,
and impromptu short talks.

Scope and target audience
-

It is important to distinguish the Haskell Implementors' Workshop from
the Haskell Symposium which is also co-located with ICFP 2016. The
Haskell Symposium is for the publication of Haskell-related research. In
contrast, the Haskell Implementors' Workshop will have no proceedings --
although we will aim to make talk videos, slides and presented data
available with the consent of the speakers.

In the Haskell Implementors' Workshop, we hope to study the underlying
technology. We want to bring together anyone interested in the
nitty-gritty details behind turning plain-text source code into a
deployed product. Having said that, members of the wider Haskell
community are more than welcome to attend the workshop -- we need your
feedback to keep the Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics
that people feel we've missed, so by all means submit a proposal even if
it doesn't fit exactly into one of these buckets:

  * Compilation techniques
  * Language features and extensions
  * Type system implementation
  * Concurrency and parallelism: language design and implementation
  * Performance, optimisation and benchmarking
  * Virtual machines and run-time systems
  * Libraries and tools for development or deployment

Talks
-

At this stage we would like to invite proposals from potential speakers
for talks and demonstrations. We are aiming for 20 minute talks with 10
minutes for questions and changeovers. We want to hear from people
writing compilers, tools, or libraries, people with cool ideas for
directions in which we should take the platform, proposals for new
features to be implemented, and half-baked crazy ideas. Please submit a
talk title and abstract of no more than 300 words.

Submissions should be made via HotCRP. The website is:
  https://icfp-hiw16.hotcrp.com/

We will also have a lightning talks session which will be organised on
the day. These talks will be 5-10 minutes, depending on available time.
Suggested topics for lightning talks are to present a single idea, a
work-in-progress project, a problem to intrigue and perplex Haskell
implementors, or simply to ask for feedback and collaborators.

Organisers
--

  * Joachim Breitner(Karlsruhe Institut für Technologie)
  * Duncan Coutts   (Well Typed)
  * Michael Snoyman (FP Complete)
  * Luite Stegeman  (ghcjs)
  * Niki Vazou  (UCSD)
  * Stephanie Weirich   (University of Pennsylvania) 
  * Edward Z. Yang - chair  (Stanford University)


Re: idea: tool to suggest adding imports

2016-03-18 Thread Edward Z. Yang
Hello John,

In my opinion, the big question is whether or not your Emacs extension
should know how to build your Haskell project.  Without this knowledge,
(1) and (3) are non-starters, since you have to pass the right set of
-package flags to GHC to get the process started.

If you do assume you have this knowledge, then I think writing a
little stub program using the GHC API (the best fit is the recent
frontend plugins feature:
https://downloads.haskell.org/~ghc/8.0.1-rc2/docs/html/users_guide/extending_ghc.html#frontend-plugins
because it will handle command line parsing for you; unfortunately, it
will also limit you to GHC 8 only) is your best bet.

Edward

Excerpts from John Williams's message of 2016-03-18 11:27:34 -0700:
> I have an idea for a tool I'd like to implement, and I'm looking for advice
> on the best way to do it.
> 
> Ideally, I want to write an Emacs extension where, if I'm editing Haskell
> code and I try to use a symbol that's not defined or imported, it will try
> to automatically add an appropriate import for the symbol. For instance, if
> I have "import Data.Maybe (isNothing)" in my module, and I try to call
> "isJust", the extension would automatically change the import to "import
> Data.Maybe (isJust, isNothing)".
> 
> The Emacs part is easy, but the Haskell part has me kind of lost. Basically
> I want to figure out how to heuristically resolve a name, using an existing
> set of imports as hints and constraints. The main heuristic I'd like to
> implement is that, if some symbols are imported from a module M, consider
> importing additional symbols from M. A more advanced heuristic might
> suggest that if a symbol is exported from a module M in a visible package
> P, the symbol should be imported from M. Finally, if a symbol is exported
> by a module in the Haskell platform, I'd like to suggest adding the
> relevant package as a dependency in the .cabal and/or stack.yaml file, and
> adding an import for it in the .hs file.
> 
> Here are some implementation options I'm considering:
> 
> 1. Add a ghci command to implement my heuristics directly, since ghc
> already understands modules, packages and import statements.
> 2. Load a modified version of the source file into ghci where imports like
> "import M (...)" are replaced with "import M", and parse the error messages
> about ambiguous symbols.
> 3. Write a separate tool that reads Haskell imports and duplicates ghc and
> cabal's name resolution mechanisms.
> 4. Write a tool that reads Haskell imports and suggests imports from a list
> of commonly imported symbols, ignoring which packages are actually visible.
> 
> Right now the options that look best to me are 2 and 4, because they don't
> require me to understand or duplicate big parts of ghc, but if modifying
> ghc isn't actually that hard, then maybe 1 is the way to go. Option 3 might
> be a good way to go if there are libraries I can use to do the hard work
> for me.
> 
> Any thoughts?
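Option 4 above can be sketched in a few lines. This is a toy illustration, not part of any existing tool: the symbol table is hand-written here, and a real version would be generated from Hoogle data or the installed package database.

```python
# Toy sketch of option 4: suggest an import from a fixed table of
# commonly imported symbols, preferring to extend an existing import
# line (the first heuristic described in the message above).
COMMON_SYMBOLS = {
    "isJust": "Data.Maybe",
    "isNothing": "Data.Maybe",
    "fromMaybe": "Data.Maybe",
    "sortBy": "Data.List",
}

def suggest_import(symbol, existing_imports):
    """Return a replacement import line for `symbol`, or None.

    `existing_imports` maps a module name to the set of symbols
    already imported from it."""
    module = COMMON_SYMBOLS.get(symbol)
    if module is None:
        return None                      # unknown symbol: no suggestion
    imported = existing_imports.get(module, set())
    if symbol in imported:
        return None                      # already imported, nothing to do
    names = sorted(imported | {symbol})
    return "import {} ({})".format(module, ", ".join(names))

print(suggest_import("isJust", {"Data.Maybe": {"isNothing"}}))
# prints: import Data.Maybe (isJust, isNothing)
```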


Re: Discovery of source dependencies without --make

2015-12-13 Thread Edward Z. Yang
I missed context, but if you just want the topological graph,
depanal will give you a module graph which you can then topsort
with topSortModuleGraph (all in GhcMake).  Then you can do what you want
with the result.  You will obviously need accurate targets but
frontend plugins and guessTarget will get you most of the way there.
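A sketch of that pipeline (GHC-API pseudocode; names as of roughly GHC 8.0, where the module graph is a list of ModSummary, so details will vary by release):

```haskell
-- Pseudocode sketch: compute a topologically sorted compile order
-- for a set of targets, as --make would.
topoOrder :: [FilePath] -> Ghc [ModSummary]
topoOrder files = do
    targets <- mapM (\f -> guessTarget f Nothing) files
    setTargets targets
    mod_graph <- depanal [] False            -- build the dependency graph
    let sccs = topSortModuleGraph False mod_graph Nothing
    return (flattenSCCs sccs)                -- flatten SCCs into a compile order
```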

Edward

Excerpts from Thomas Miedema's message of 2015-12-13 16:12:39 -0800:
> On Fri, Nov 28, 2014 at 3:41 PM, Lars Hupel  wrote:
> 
> > Let's say the hypothetical feature is selected via the GHC flag
> > "--topo-sort". It would add a step before regular compilation and
> > wouldn't affect any other flag:
> >
> >   ghc -c --topo-sort fileA.hs fileB.hs ...
> >
> > This would first read in the specified source files and look at their
> > module headers and import statements. It would build a graph of module
> > dependencies _between_ the specified source files (ignoring circular
> > dependencies), perform a topological sort on that graph, and proceed
> > with compiling the source files in that order.
> >
> 
> GHC 8 will have support for Frontend plugins. Frontend plugins enable you
> to write plugins to replace
> GHC major modes.
> 
> E.g. instead of saying
> 
> ghc --make A B C
> 
> you can now say:
> 
> ghc --frontend TopoSort A B C
> 
> You still have to implement TopoSort.hs yourself, using the GHC API to
> compile A B C in topological order, but some of the plumbing is taken care
> of by the Frontend plugin infrastructure already.
> 
> Take a look at this commit, especially the user's guide section and the
> test case:
> https://github.com/ghc/ghc/commit/a3c2a26b3af034f09c960b2dad38f73be7e3a655.


Re: type error formatting

2015-10-23 Thread Edward Z. Yang
I think this is quite a reasonable suggestion.

Edward

Excerpts from Evan Laforge's message of 2015-10-23 19:48:07 -0700:
> Here's a typical simple type error from GHC:
> 
> Derive/Call/India/Pakhawaj.hs:142:62:
> Couldn't match type ‘Text’ with ‘(a1, Syllable)’
> Expected type: [([(a1, Syllable)], [Sequence Bol])]
>   Actual type: [([Syllable], [Sequence Bol])]
> Relevant bindings include
>   syllables :: [(a1, Syllable)]
> (bound at Derive/Call/India/Pakhawaj.hs:141:16)
>   best_match :: [(a1, Syllable)]
> -> Maybe (Int, ([(a1, Syllable)], [(a1, Sequence Bol)]))
> (bound at Derive/Call/India/Pakhawaj.hs:141:5)
> In the second argument of ‘mapMaybe’, namely ‘all_bols’
> In the second argument of ‘($)’, namely
>   ‘mapMaybe (match_bols syllables) all_bols’
> 
> I've been having more trouble than usual reading GHC's errors, and I
> finally spent some time to think about it.  The problem is that this new
> "relevant bindings include" section gets in between the expected and actual
> types (I still don't like that wording but I've gotten used to it), which
> is the most critical part, and the location context, which is second most
> critical.  Notice the same effect in the previous sentence :)  After I see
> a type error the next thing I want to see is the where it happened, so I
> have to skip over the bindings, which can be long and complicated.  Then I
> usually know what to do, and only look into the bindings if something more
> complicated is going on, like wonky inference.  So how about reordering the
> message:
> 
> Derive/Call/India/Pakhawaj.hs:142:62:
> Couldn't match type ‘Text’ with ‘(a1, Syllable)’
> Expected type: [([(a1, Syllable)], [Sequence Bol])]
>   Actual type: [([Syllable], [Sequence Bol])]
> In the second argument of ‘mapMaybe’, namely ‘all_bols’
> In the second argument of ‘($)’, namely
>   ‘mapMaybe (match_bols syllables) all_bols’
> Relevant bindings include
>   syllables :: [(a1, Syllable)]
> (bound at Derive/Call/India/Pakhawaj.hs:141:16)
>   best_match :: [(a1, Syllable)]
> -> Maybe (Int, ([(a1, Syllable)], [(a1, Sequence Bol)]))
> (bound at Derive/Call/India/Pakhawaj.hs:141:5)
> 
> After this, why not go one step further and set off the various sections
> visibly to make it easier to scan.  The context section can also be really
> long if it gets an entire do block or record:
> 
> Derive/Call/India/Pakhawaj.hs:142:62:
>   * Couldn't match type ‘Text’ with ‘(a1, Syllable)’
> Expected type: [([(a1, Syllable)], [Sequence Bol])]
>   Actual type: [([Syllable], [Sequence Bol])]
>   * In the second argument of ‘mapMaybe’, namely ‘all_bols’
> In the second argument of ‘($)’, namely
>   ‘mapMaybe (match_bols syllables) all_bols’
>   * Relevant bindings include
>   syllables :: [(a1, Syllable)]
> (bound at Derive/Call/India/Pakhawaj.hs:141:16)
>   best_match :: [(a1, Syllable)]
> -> Maybe (Int, ([(a1, Syllable)], [(a1, Sequence Bol)]))
> (bound at Derive/Call/India/Pakhawaj.hs:141:5)
> 
> Or alternately, taking up a bit more vertical space:
> 
> Derive/Call/India/Pakhawaj.hs:142:62:
> Couldn't match type ‘Text’ with ‘(a1, Syllable)’
> Expected type: [([(a1, Syllable)], [Sequence Bol])]
>   Actual type: [([Syllable], [Sequence Bol])]


Re: [Haskell-cafe] The evil GADTs extension in ghci 7.8.4 (maybe in other versions too?)

2015-06-04 Thread Edward Z. Yang
GHC used to always generalize let-bindings, but our experience
with GADTs led us to decide that let should not be generalized
with GADTs.  So it's not that we /wanted/ MonoLocalBinds; rather,
having it makes the GADT machinery simpler.

This blog post gives more details on the matter:
https://ghc.haskell.org/trac/ghc/blog/LetGeneralisationInGhc7

Edward

Excerpts from Ki Yung Ahn's message of 2015-06-04 20:37:27 -0700:
> Such order dependence could be very confusing for users. I thought I
> had turned off a certain feature, but having some other extension turn
> it back on is strange. Wouldn't it be better to decouple GADTs and
> MonoLocalBinds?
> 
> On 2015-06-04 20:31, Edward Z. Yang wrote:
> > This is because -XGADTs implies -XMonoLocalBinds.
> >
> > Edward
> >
> > Excerpts from Ki Yung Ahn's message of 2015-06-04 20:29:50 -0700:
> > > \y -> let x = (\z -> y) in x x
> > >
> > > is a perfectly fine term whose type is a -> a.
> > > (1) With no options, ghci infers its type correctly.
> > > (2) However, with -XGADTs, the type check fails with an occurs-check error.
> > > (3) We can remedy this by supplying some additional options.
> > > (4) However, if you put the -XGADTs option at the end, it fails again :(
> > >
> > >
> > > kyagrd@kyahp:~$ ghci
> > > GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
> > > Loading package ghc-prim ... linking ... done.
> > > Loading package integer-gmp ... linking ... done.
> > > Loading package base ... linking ... done.
> > > Prelude> :t \y -> let x = (\z -> y) in x x
> > > \y -> let x = (\z -> y) in x x :: t -> t
> > > Prelude> :q
> > > Leaving GHCi.
> > >
> > >
> > > kyagrd@kyahp:~$ ghci -XGADTs
> > > GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
> > > Loading package ghc-prim ... linking ... done.
> > > Loading package integer-gmp ... linking ... done.
> > > Loading package base ... linking ... done.
> > > Prelude> :t \y -> let x = (\z -> y) in x x
> > >
> > > <interactive>:1:30:
> > >     Occurs check: cannot construct the infinite type: t0 ~ t0 -> t
> > >     Relevant bindings include
> > >       x :: t0 -> t (bound at <interactive>:1:11)
> > >       y :: t (bound at <interactive>:1:2)
> > >     In the first argument of ‘x’, namely ‘x’
> > >     In the expression: x x
> > > Prelude> :q
> > > Leaving GHCi.
> > >
> > >
> > > ~$ ghci -XGADTs -XNoMonoLocalBinds -XNoMonomorphismRestriction
> > > GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
> > > Loading package ghc-prim ... linking ... done.
> > > Loading package integer-gmp ... linking ... done.
> > > Loading package base ... linking ... done.
> > > Prelude> :t \y -> let x = (\z -> y) in x x
> > > \y -> let x = (\z -> y) in x x :: t -> t
> > > Prelude> :q
> > > Leaving GHCi.
> > >
> > >
> > > ~$ ghci -XNoMonoLocalBinds -XNoMonomorphismRestriction -XGADTs
> > > GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
> > > Loading package ghc-prim ... linking ... done.
> > > Loading package integer-gmp ... linking ... done.
> > > Loading package base ... linking ... done.
> > > Prelude> :t \y -> let x = (\z -> y) in x x
> > >
> > > <interactive>:1:30:
> > >     Occurs check: cannot construct the infinite type: t0 ~ t0 -> t
> > >     Relevant bindings include
> > >       x :: t0 -> t (bound at <interactive>:1:11)
> > >       y :: t (bound at <interactive>:1:2)
> > >     In the first argument of ‘x’, namely ‘x’


Re: [Haskell-cafe] The evil GADTs extension in ghci 7.8.4 (maybe in other versions too?)

2015-06-04 Thread Edward Z. Yang
This is because -XGADTs implies -XMonoLocalBinds.

Edward

Excerpts from Ki Yung Ahn's message of 2015-06-04 20:29:50 -0700:
 \y -> let x = (\z -> y) in x x
 
 is a perfectly fine term whose type is  a -> a.
 (1) With no options, ghci infers its type correctly.
 (2) However, with -XGADTs, the type check fails and raises an occurs check error.
 (3) We can remedy this by supplying some additional options.
 (4) However, if you put the -XGADTs option at the end, it fails again :(
 
 
 kyagrd@kyahp:~$ ghci
 GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Prelude> :t \y -> let x = (\z -> y) in x x
 \y -> let x = (\z -> y) in x x :: t -> t
 Prelude> :q
 Leaving GHCi.
 
 
 kyagrd@kyahp:~$ ghci -XGADTs
 GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Prelude> :t \y -> let x = (\z -> y) in x x
 
 <interactive>:1:30:
  Occurs check: cannot construct the infinite type: t0 ~ t0 -> t
  Relevant bindings include
    x :: t0 -> t (bound at <interactive>:1:11)
    y :: t (bound at <interactive>:1:2)
  In the first argument of ‘x’, namely ‘x’
  In the expression: x x
 Prelude> :q
 Leaving GHCi.
 
 
 ~$ ghci -XGADTs -XNoMonoLocalBinds -XNoMonomorphismRestriction
 GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Prelude> :t \y -> let x = (\z -> y) in x x
 \y -> let x = (\z -> y) in x x :: t -> t
 Prelude> :q
 Leaving GHCi.
 
 
 ~$ ghci -XNoMonoLocalBinds -XNoMonomorphismRestriction -XGADTs
 GHCi, version 7.8.4: http://www.haskell.org/ghc/  :? for help
 Loading package ghc-prim ... linking ... done.
 Loading package integer-gmp ... linking ... done.
 Loading package base ... linking ... done.
 Prelude> :t \y -> let x = (\z -> y) in x x
 
 <interactive>:1:30:
  Occurs check: cannot construct the infinite type: t0 ~ t0 -> t
  Relevant bindings include
    x :: t0 -> t (bound at <interactive>:1:11)
    y :: t (bound at <interactive>:1:2)
  In the first argument of ‘x’, namely ‘x’
 


Re: SRC_HC_OPTS in perf build

2015-05-26 Thread Edward Z. Yang
Sounds like an oversight to me!  Submit a fix?

Excerpts from Jeremy's message of 2015-05-25 06:44:10 -0700:
 build.mk.sample contains the lines:
 
 # perf matches the default settings, repeated here for comparison:
 SRC_HC_OPTS = -O -H64m
 
 However, in config.mk.in this is:
 
 SRC_HC_OPTS += -H32m -O
 
 What actually is the default for SRC_HC_OPTS? Why does config.mk.in seem to
 set it to -H32m, then every profile in build.mk.sample sets -H64m?
 


Re: runghc and GhcWithInterpreter

2015-04-06 Thread Edward Z. Yang
No, it's not supposed to work, since runghc interprets GHC code.
runghc itself is just a little shell script which calls GHC proper
with the -f flag, so I suppose the build system was just not set
up to skip creating this link in that case.

Edward

Excerpts from Jeremy's message of 2015-04-06 07:34:34 -0700:
 I've built GHC with GhcWithInterpreter = NO. runghc is built and installed,
 but errors out with "not built for interactive use".
 
 Is runghc supposed to work with such a build? If not, why is it built at
 all?
 


Re: Binary bloat in 7.10

2015-04-01 Thread Edward Z. Yang
Yes, this does seem like a potential culprit, although
we did do some measurements and I didn't think it was too bad.
Maybe we were wrong!

Edward

Excerpts from Jeremy's message of 2015-04-01 07:26:55 -0700:
 Carter Schonwald wrote
  How much of this might be attributable to longer linker symbol names? GHC
  7.10 object code does have larger symbols!  Is there a way to easily
  tabulate that?
 
 That would explain why the hi files have also increased many-fold. Is there
 any way to avoid the larger symbols?
 


Re: Found hole

2015-01-20 Thread Edward Z. Yang
Hello Volker,

All identifiers prefixed with an underscore are typed holes,
see:
https://downloads.haskell.org/~ghc/7.8.3/docs/html/users_guide/typed-holes.html

Edward
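
(A hedged illustration, not from the original thread; the flag shown is the standard way to downgrade hole errors.) The same program can even be made to compile by deferring the hole:

```haskell
-- Any identifier that begins with an underscore is parsed as a typed hole.
-- By default GHC rejects the module with the "Found hole" error quoted
-- below; with -fdefer-typed-holes it compiles, the hole's type is reported
-- only as a warning, and evaluating the hole fails at runtime instead.
{-# OPTIONS_GHC -fdefer-typed-holes #-}

main :: IO ()
main = _exit 0
```

Renaming `_exit` to a binding that is actually in scope removes the hole entirely.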

Excerpts from Volker Wysk's message of 2015-01-20 10:36:09 -0800:
 Hello!
 
 What is a hole? 
 
 This program fails to compile:
 
 main = _exit 0
 
 I get this error message:
 
 ex.hs:1:8:
 Found hole ‘_exit’ with type: t
 Where: ‘t’ is a rigid type variable bound by
the inferred type of main :: t at ex.hs:1:1
 Relevant bindings include main :: t (bound at ex.hs:1:1)
 In the expression: _exit
 In an equation for ‘main’: main = _exit
 
 When I replace _exit with foo, it produces a not in scope error, as 
 expected. What is special about _exit? It doesn't occur in the Haskell 
 Hierarchical Libraries.
 
 Bye
 Volker
 


Re: GHC 7.10 regression when using foldr

2015-01-20 Thread Edward Z. Yang
I like this proposal: if you're explicit about an import that
would otherwise be implicit by Prelude, you shouldn't get a
warning for it. If it is not already the case, we also need to
make sure the implicit Prelude import never causes unused import
errors.

Edward
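
(A sketch invented for illustration; the module and import names are not from the thread.) The trick under discussion looks like this in practice:

```haskell
-- On GHC 7.10 the Prelude re-exports many Foldable/Traversable names.
-- Putting the explicit `import Prelude` *last* keeps -Wall happy: GHC
-- checks top-down whether each import provides something new, and the
-- trailing Prelude import still contributes names no earlier line does,
-- so none of the imports is reported as redundant.
module Example (summarize) where

import Data.Foldable    (toList)
import Data.Traversable (traverse)
import Prelude

summarize :: Maybe Int -> IO [Int]
summarize m = do
  _ <- traverse print m
  return (toList m)
```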

Excerpts from Edward Kmett's message of 2015-01-20 15:41:13 -0800:
 Sure.
 
 Adding it to the CHANGELOG makes a lot of sense. I first found out about it
 only a few weeks ago when Herbert mentioned it in passing.
 
 Of course, the geek in me definitely prefers technical fixes to human ones.
 Humans are messy. =)
 
 I'd be curious how much of the current suite of warnings could be fixed
 just by switching the implicit Prelude import to the end of the import list
 inside GHC.
 
 Now that Herbert has all of his crazy tooling to build stuff with 7.10 and
 with HEAD, it might be worth trying out such a change to see how much it
 reduces the warning volume and if it somehow manages to introduce any new
 warnings.
 
 I hesitate to make such a proposal this late in the release candidate game,
 but if it worked it'd be pretty damn compelling.
 
 -Edward
 
 On Tue, Jan 20, 2015 at 6:27 PM, Edward Z. Yang ezy...@mit.edu wrote:
 
  Hello Edward,
 
  Shouldn't we publicize this trick? Perhaps in the changelog?
 
  Edward
 
  Excerpts from Edward Kmett's message of 2015-01-20 15:22:57 -0800:
 Building -Wall clean across this change-over has a bit of a trick to it.
  
   The easiest way I know of when folks already had lots of
  
   import Data.Foldable
   import Data.Traversable
  
   stuff
  
   is to just add
  
   import Prelude
  
   explicitly to the bottom of your import list rather than painstakingly
   exclude the imports with CPP.
  
   This has the benefit of not needing a bunch of CPP to manage what names
   come from where.
  
   Why? GHC checks that the imports provide something 'new' that is used by
the module in a top-down fashion, and you are almost assuredly using
   something from Prelude that didn't come from one of the modules above.
  
   On the other hand the implicit import of Prelude effectively would come
   first in the list.
  
   It is a dirty trick, but it does neatly side-step this problem for folks
  in
   your situation.
  
   -Edward
  
   On Tue, Jan 20, 2015 at 6:12 PM, Bryan O'Sullivan b...@serpentine.com
   wrote:
  
   
On Tue, Jan 20, 2015 at 3:02 PM, Herbert Valerio Riedel h...@gnu.org
wrote:
   
I'm a bit confused, several past attoparsec versions seem to build
  just
fine with GHC 7.10:
   
  https://ghc.haskell.org/~hvr/buildreports/attoparsec.html
   
were there hidden breakages not resulting in compile errors?
Or are the fixes you mention about restoring -Wall hygiene?
   
   
I build with -Wall -Werror, and also have to maintain the test and
benchmark suites.
   
 


Re: GHC 7.10 regression when using foldr

2015-01-20 Thread Edward Z. Yang
I don't see why that would be the case: we haven't *excluded* any
old import lists, so -ddump-minimal-imports could still
take advantage of Prelude in a warning-free way.

Edward

Excerpts from Edward Kmett's message of 2015-01-20 16:36:53 -0800:
 It isn't without a cost. On the down-side, the results of
 -ddump-minimal-imports would be er.. less minimal.
 
 On Tue, Jan 20, 2015 at 6:47 PM, Edward Z. Yang ezy...@mit.edu wrote:
 
  I like this proposal: if you're explicit about an import that
  would otherwise be implicit by Prelude, you shouldn't get a
  warning for it. If it is not already the case, we also need to
  make sure the implicit Prelude import never causes unused import
  errors.
 
  Edward
 
  Excerpts from Edward Kmett's message of 2015-01-20 15:41:13 -0800:
   Sure.
  
   Adding it to the CHANGELOG makes a lot of sense. I first found out about
  it
   only a few weeks ago when Herbert mentioned it in passing.
  
   Of course, the geek in me definitely prefers technical fixes to human
  ones.
   Humans are messy. =)
  
   I'd be curious how much of the current suite of warnings could be fixed
   just by switching the implicit Prelude import to the end of the import
  list
   inside GHC.
  
   Now that Herbert has all of his crazy tooling to build stuff with 7.10
  and
   with HEAD, it might be worth trying out such a change to see how much it
   reduces the warning volume and if it somehow manages to introduce any new
   warnings.
  
   I hesitate to make such a proposal this late in the release candidate
  game,
   but if it worked it'd be pretty damn compelling.
  
   -Edward
  
   On Tue, Jan 20, 2015 at 6:27 PM, Edward Z. Yang ezy...@mit.edu wrote:
  
Hello Edward,
   
Shouldn't we publicize this trick? Perhaps in the changelog?
   
Edward
   
Excerpts from Edward Kmett's message of 2015-01-20 15:22:57 -0800:
 Building -Wall clean across this change-over has a bit of a trick to
  it.

 The easiest way I know of when folks already had lots of

 import Data.Foldable
 import Data.Traversable

 stuff

 is to just add

 import Prelude

 explicitly to the bottom of your import list rather than
  painstakingly
 exclude the imports with CPP.

 This has the benefit of not needing a bunch of CPP to manage what
  names
 come from where.

 Why? GHC checks that the imports provide something 'new' that is
  used by
  the module in a top-down fashion, and you are almost assuredly using
 something from Prelude that didn't come from one of the modules
  above.

 On the other hand the implicit import of Prelude effectively would
  come
 first in the list.

 It is a dirty trick, but it does neatly side-step this problem for
  folks
in
 your situation.

 -Edward

 On Tue, Jan 20, 2015 at 6:12 PM, Bryan O'Sullivan 
  b...@serpentine.com
 wrote:

 
  On Tue, Jan 20, 2015 at 3:02 PM, Herbert Valerio Riedel 
  h...@gnu.org
  wrote:
 
  I'm a bit confused, several past attoparsec versions seem to build
just
  fine with GHC 7.10:
 
https://ghc.haskell.org/~hvr/buildreports/attoparsec.html
 
  were there hidden breakages not resulting in compile errors?
  Or are the fixes you mention about restoring -Wall hygiene?
 
 
  I build with -Wall -Werror, and also have to maintain the test and
  benchmark suites.
 
   
 


Re: Compiling a cabal project with LLVM on GHC 7.10 RC1

2015-01-07 Thread Edward Z. Yang
...is there -dynamic in the -v output?  Don't you also want
--disable-shared?

Excerpts from Brandon Simmons's message of 2015-01-07 12:21:48 -0800:
 I've tried:
 
   $ cabal install --only-dependencies -w
 /usr/local/bin/ghc-7.10.0.20141222  --enable-tests --enable-benchmarks
 --ghc-option=-fllvm --ghc-option=-static
   $ cabal configure -w /usr/local/bin/ghc-7.10.0.20141222
 --enable-tests --enable-benchmarks --ghc-option=-fllvm
 --ghc-option=-static
   $ cabal build
   Building foo-0.3.0.0...
   Preprocessing library foo-0.3.0.0...
 
   when making flags consistent: Warning:
   Using native code generator rather than LLVM, as LLVM is
  incompatible with -fPIC and -dynamic on this platform
 
 I don't see anything referencing PIC in the output of cabal build
 -v. I can build a hello world program, just fine with `ghc --make`:
 
   $ /usr/local/bin/ghc-7.10.0.20141222 --make -O2 -fllvm   Main.hs
   [1 of 1] Compiling Main ( Main.hs, Main.o )
   Linking Main ...
 
 Thanks,
 Brandon


Re: GHC 7.4.2 on Ubuntu Trusty

2015-01-04 Thread Edward Z. Yang
For transformers, I needed:

diff --git a/Control/Monad/Trans/Error.hs b/Control/Monad/Trans/Error.hs
index 0158a8a..0dea478 100644
--- a/Control/Monad/Trans/Error.hs
+++ b/Control/Monad/Trans/Error.hs
@@ -57,6 +57,10 @@ instance MonadPlus IO where
     mzero       = ioError (userError "mzero")
     m `mplus` n = m `catchIOError` \_ -> n
 
+instance Alternative IO where
+    empty = mzero
+    (<|>) = mplus
+
 #if !(MIN_VERSION_base(4,4,0))
 -- exported by System.IO.Error from base-4.4
 catchIOError :: IO a -> (IOError -> IO a) -> IO a

For hpc, I needed:

 Build-Depends:
-base       >= 4.4.1 && < 4.8,
+base       >= 4.4.1 && < 4.9,
 containers >= 0.4.1 && < 0.6,
 directory  >= 1.1   && < 1.3,
-time       >= 1.2   && < 1.5
+time       >= 1.2   && < 1.6

For hoopl, I needed:

-  Build-Depends: base >= 4.3 && < 4.8
+  Build-Depends: base >= 4.3 && < 4.9

For the latter two, I think this should be a perfectly acceptable
point release.  For transformers, we could also just ifdef the
Alternative into the GHC sources.

Edward

Excerpts from Herbert Valerio Riedel's message of 2015-01-04 00:22:28 -0800:
 Hello Edward,
 
 On 2015-01-04 at 08:54:58 +0100, Edward Z. Yang wrote:
 
 [...]
 
  There are also some changes to hoopl, transformers and hpc (mostly
 because they're bootstrap libraries.)
 
 ...what kind of changes specifically? 
 
 Once thing that needs to be considered is that we'd require to upstream
 changes to transformers (it's not under GHC HQ's direct control) for a
 transformers point(?) release ... and we'd need that as we can't release
 any source-tarball that contains libraries (which get installed into the
 pkg-db) that don't match their upstream version on Hackage.
 
 Cheers,
   hvr


Re: GHC 7.4.2 on Ubuntu Trusty

2015-01-03 Thread Edward Z. Yang
Hey guys,

I have a local branch of ghc-7.8 which can be compiled by 7.10.
The most annoying patch that needed to be backported was AMP
adjustment changes.  I also messed up some stuff involving LANGUAGE
pragmas which I am going to go back and clean up.

https://github.com/ezyang/ghc/tree/ghc-7.8

There are also some changes to hoopl, transformers and hpc (mostly
because they're bootstrap libraries.)

Unfortunately I can't easily Phab these changes.  Any suggestions
for how to coordinate landing these changes?

Edward

Excerpts from Yitzchak Gale's message of 2014-12-28 13:38:47 -0500:
 Resurrecting this thread:
 
 My impression was that Edward's suggestion was a simple and
 obvious solution to the problem of previous GHC versions quickly
 becoming orphaned and unbuildable. But Austin thought that this
 thread was stuck.
 
 Would Edward's suggestion be difficult to implement for any
 reason? Specifically, right now would be the time to do it, and
 it would mean:
 
 1. Create a 7.8.5 branch.
 2. Tweak the stage 1 Haskell sources to build with 7.10 and tag
 3. Create only a source tarball and upload it to the download
 site
 
 Thanks,
 Yitz
 
 On Wed, Oct 29, 2014 at 12:10 AM, Edward Z. Yang wrote:
  Excerpts from Yitzchak Gale's message of 2014-10-28 13:58:08 -0700:
  How about this: Currently, every GHC source distribution
  requires no later than its own version of GHC for bootstrapping.
  Going backwards, that chops up the sequence of GHC versions
  into tiny incompatible pieces - there is no way to start with a
  working GHC and work backwards to an older version by compiling
  successively older GHC sources.
 
  If instead each GHC could be compiled using at least one
  subsequent version, the chain would not be broken. I.e.,
  always provide a compatibility flag or some other reasonably
  simple mechanism that would enable the current GHC to
  compile the source code of at least the last previous released
  version.
 
  Here is an alternate proposal: when we make a new major version release,
  we should also make a minor version release of the previous series, which
  is prepped so that it can compile from the new major version.  If it
  is the case that one version of the compiler can compile any other
  version in the same series, this would be sufficient to go backwards.
 
  Concretely, the action plan is very simple too: take 7.6 and apply as
  many patches as is necessary to make it compile from 7.8, and cut
  a release with those patches.
 
  Edward


Re: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install

2015-01-01 Thread Edward Z. Yang
If you still have your old GHC around, it will be much better to
compile the newest cabal-install using the *old GHC*, and then
use that copy to bootstrap a copy of the newest cabal-install.

Edward

Excerpts from George Colpitts's message of 2015-01-01 12:08:44 -0500:
 $ cabal update
 Downloading the latest package list from hackage.haskell.org
 Note: there is a new version of cabal-install available.
 To upgrade, run: cabal install cabal-install
 bash-3.2$ cabal install -j3 cabal-install
 ...
 
 
 Resolving dependencies...
 cabal: Could not resolve dependencies:
 trying: cabal-install-1.20.0.6 (user goal)
 trying: base-4.8.0.0/installed-779... (dependency of cabal-install-1.20.0.6)
 next goal: process (dependency of cabal-install-1.20.0.6)
 rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0,
 process => unix==2.7.1.0/installed-4ae...)
 trying: process-1.2.1.0
 next goal: directory (dependency of cabal-install-1.20.0.6)
 rejecting: directory-1.2.1.1/installed-b08... (conflict: directory =>
 time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5)
 rejecting: directory-1.2.1.0 (conflict: base==4.8.0.0/installed-779...,
 directory => base>=4.5 && <4.8)
 rejecting: directory-1.2.0.1, 1.2.0.0 (conflict:
 base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7)
 rejecting: directory-1.1.0.2 (conflict: base==4.8.0.0/installed-779...,
 directory => base>=4.2 && <4.6)
 rejecting: directory-1.1.0.1 (conflict: base==4.8.0.0/installed-779...,
 directory => base>=4.2 && <4.5)
 rejecting: directory-1.1.0.0 (conflict: base==4.8.0.0/installed-779...,
 directory => base>=4.2 && <4.4)
 rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0 (conflict:
 process => directory>=1.1 && <1.3)
 Dependency tree exhaustively searched.


Re: ANNOUNCE: GHC 7.10.1 Release Candidate 1 - problem with latest cabal-install

2015-01-01 Thread Edward Z. Yang
Oh, because Cabal HQ hasn't cut a release yet.

Try installing out of Git.  https://github.com/haskell/cabal/

Edward

Excerpts from George Colpitts's message of 2015-01-01 14:23:50 -0500:
 I still have 7.8.3 but it doesn't seem to want to build the latest cabal:
 
  ghc --version
 The Glorious Glasgow Haskell Compilation System, version 7.8.3
 bash-3.2$ cabal install cabal-install
 Resolving dependencies...
 Configuring cabal-install-1.20.0.6...
 Building cabal-install-1.20.0.6...
 Installed cabal-install-1.20.0.6
 Updating documentation index
 /Users/gcolpitts/Library/Haskell/share/doc/index.html
 
 On Thu, Jan 1, 2015 at 2:54 PM, Edward Z. Yang ezy...@mit.edu wrote:
 
  If you still have your old GHC around, it will be much better to
  compile the newest cabal-install using the *old GHC*, and then
  use that copy to bootstrap a copy of the newest cabal-install.
 
  Edward
 
  Excerpts from George Colpitts's message of 2015-01-01 12:08:44 -0500:
   $ cabal update
   Downloading the latest package list from hackage.haskell.org
   Note: there is a new version of cabal-install available.
   To upgrade, run: cabal install cabal-install
   bash-3.2$ cabal install -j3 cabal-install
   ...
  
  
    Resolving dependencies...
    cabal: Could not resolve dependencies:
    trying: cabal-install-1.20.0.6 (user goal)
    trying: base-4.8.0.0/installed-779... (dependency of
   cabal-install-1.20.0.6)
    next goal: process (dependency of cabal-install-1.20.0.6)
    rejecting: process-1.2.1.0/installed-2db... (conflict: unix==2.7.1.0,
    process => unix==2.7.1.0/installed-4ae...)
    trying: process-1.2.1.0
    next goal: directory (dependency of cabal-install-1.20.0.6)
    rejecting: directory-1.2.1.1/installed-b08... (conflict: directory =>
    time==1.5.0.1/installed-c23..., cabal-install => time>=1.1 && <1.5)
    rejecting: directory-1.2.1.0 (conflict: base==4.8.0.0/installed-779...,
    directory => base>=4.5 && <4.8)
    rejecting: directory-1.2.0.1, 1.2.0.0 (conflict:
    base==4.8.0.0/installed-779..., directory => base>=4.2 && <4.7)
    rejecting: directory-1.1.0.2 (conflict: base==4.8.0.0/installed-779...,
    directory => base>=4.2 && <4.6)
    rejecting: directory-1.1.0.1 (conflict: base==4.8.0.0/installed-779...,
    directory => base>=4.2 && <4.5)
    rejecting: directory-1.1.0.0 (conflict: base==4.8.0.0/installed-779...,
    directory => base>=4.2 && <4.4)
    rejecting: directory-1.0.1.2, 1.0.1.1, 1.0.1.0, 1.0.0.3, 1.0.0.0
   (conflict:
    process => directory>=1.1 && <1.3)
    Dependency tree exhaustively searched.
 


Re: ANNOUNCE: GHC 7.10.1 Release Candidate 1

2014-12-27 Thread Edward Z. Yang
Hello lonetiger,

I don't think any relevant logic changed in 7.10; however, this
commit may be relevant:

commit 8fb03bfd768ea0d5c666bbe07a50cb05214bbe92
Author: Ian Lynagh ig...@earth.li  Sun Mar 18 11:42:31 2012
Committer:  Ian Lynagh ig...@earth.li  Sun Mar 18 11:42:31 2012
Original File:  compiler/typecheck/TcForeign.lhs

If we say we're treating StdCall as CCall, then actually do so

But this warning should have applied even on older versions of GHC.

Are you running x86_64 Windows?  stdcall is specific to x86_32.

Edward

Excerpts from lonetiger's message of 2014-12-24 08:24:52 -0500:
 Hi,
 
 
 I’ve had some issues building this (and the git HEAD): it seems that the
 config.guess and config.sub in the libffi tarball are old and don’t detect
 the platform when building with msys2. I had to unpack the tarfile and update
 the files; after this it built correctly.
 
 
 Then I proceeded to try to make a shared library and got the following 
 warning:
 
 
 ManualCheck.hs:18:1: Warning:
     the 'stdcall' calling convention is unsupported on this platform,
     treating as ccall
     When checking declaration:
       foreign export stdcall "testFoo" testFooA :: CInt -> IO (FooPtr)
 
 
 
 Does this mean that GHC no longer supports stdcall on Windows? Or could this
 be related to the issue I had building?
 
 
 Regards,
 
 Tamar
 
 
 
 
 
 From: Austin Seipp
 Sent: Tuesday, December 23, 2014 15:36
 To: ghc-d...@haskell.org, glasgow-haskell-users@haskell.org
 
 
 
 
 
 We are pleased to announce the first release candidate for GHC 7.10.1:
 
 https://downloads.haskell.org/~ghc/7.10.1-rc1/
 
 This includes the source tarball and bindists for 64bit/32bit Linux
 and Windows. Binary builds for other platforms will be available
 shortly. (CentOS 6.5 binaries are not available at this time like they
 were for 7.8.x). These binaries and tarballs have an accompanying
 SHA256SUMS file signed by my GPG key id (0x3B58D86F).
 
 We plan to make the 7.10.1 release sometime in February of 2015. We
 expect another RC to occur during January of 2015.
 
 Please test as much as possible; bugs are much cheaper if we find them
 before the release!
 


Re: Thread behavior in 7.8.3

2014-10-29 Thread Edward Z. Yang
I don't think this is directly related to the problem, but if you have a
thread that isn't yielding, you can force it to yield by using
-fno-omit-yields on your code.  It won't help if the non-yielding code
is in a library, and it won't help if the problem was that you just
weren't setting timeouts finely enough (which sounds like what was
happening). FYI.

Edward
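
(An invented example to make the failure mode concrete; names and constants are illustrative, not from the thread.) The kind of code -fno-omit-yields targets is a loop that never allocates:

```haskell
import Control.Concurrent (forkIO, threadDelay)

-- Compiled with -O, `spin` allocates nothing, so it never reaches a
-- heap-check yield point; on a single capability it can starve every
-- other thread until it returns.  Building with -fno-omit-yields
-- inserts yield checks even into loops like this one.
spin :: Int -> Int
spin 0 = 0
spin n = spin (n - 1)

main :: IO ()
main = do
  _ <- forkIO (print (spin 200000000))
  threadDelay 100000   -- 100 ms; without yield points this wait can stretch
  putStrLn "main thread ran"
```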

Excerpts from John Lato's message of 2014-10-29 17:19:46 -0700:
 I guess I should explain what that flag does...
 
 The GHC RTS maintains capabilities, the number of capabilities is specified
 by the `+RTS -N` option.  Each capability is a virtual machine that
 executes Haskell code, and maintains its own runqueue of threads to process.
 
 A capability will perform a context switch at the next heap block
 allocation (every 4k of allocation) after the timer expires.  The timer
 defaults to 20ms, and can be set by the -C flag.  Capabilities perform
 context switches in other circumstances as well, such as when a thread
 yields or blocks.
 
 My guess is that either the context switching logic changed in ghc-7.8, or
 possibly your code used to trigger a switch via some other mechanism (stack
 overflow or something maybe?), but is optimized differently now so instead
 it needs to wait for the timer to expire.
 
 The problem we had was that a time-sensitive thread was getting scheduled
 on the same capability as a long-running non-yielding thread, so the
 time-sensitive thread had to wait for a context switch timeout (even though
 there were free cores available!).  I expect even with -N4 you'll still see
 occasional delays (perhaps 5% of calls).
 
 We've solved our problem with judicious use of `forkOn`, but that won't
 help at N1.
 
 We did see this behavior in 7.6, but it's definitely worse in 7.8.
 
 Incidentally, has there been any interest in a work-stealing scheduler?
 There was a discussion from about 2 years ago, in which Simon Marlow noted
 it might be tricky, but it would definitely help in situations like this.
 
 John L.
 
 On Thu, Oct 30, 2014 at 8:02 AM, Michael Jones m...@proclivis.com wrote:
 
  John,
 
  Adding -C0.005 makes it much better. Using -C0.001 makes it behave more
  like -N4.
 
  Thanks. This saves my project, as I need to deploy on a single core Atom
  and was stuck.
 
  Mike
 
  On Oct 29, 2014, at 5:12 PM, John Lato jwl...@gmail.com wrote:
 
  By any chance do the delays get shorter if you run your program with `+RTS
  -C0.005` ?  If so, I suspect you're having a problem very similar to one
  that we had with ghc-7.8 (7.6 too, but it's worse on ghc-7.8 for some
  reason), involving possible misbehavior of the thread scheduler.
 
  On Wed, Oct 29, 2014 at 2:18 PM, Michael Jones m...@proclivis.com wrote:
 
  I have a general question about thread behavior in 7.8.3 vs 7.6.X
 
  I moved from 7.6 to 7.8 and my application behaves very differently. I
  have three threads, an application thread that plots data with wxhaskell or
  sends it over a network (depends on settings), a thread doing usb bulk
  writes, and a thread doing usb bulk reads. Data is moved around with TChan,
  and TVar is used for coordination.
 
  When the application was compiled with 7.6, my stream of usb traffic was
  smooth. With 7.8, there are lots of delays where nothing seems to be
  running. These delays are up to 40ms, whereas with 7.6 delays were a 1ms or
  so.
 
  When I add -N2 or -N4, the 7.8 program runs fine. But on 7.6 it runs fine
  without with -N2/4.
 
  The program is compiled -O2 with profiling. The -N2/4 version uses more
  memory,  but in both cases with 7.8 and with 7.6 there is no space leak.
 
  I tried to compile and use -ls so I could take a look with threadscope,
  but the application hangs and writes no data to the file. The CPU fans run
  wild like it is in an infinite loop. It at least pops an unpainted
  wxhaskell window, so it got partially running.
 
  One of my libraries uses option -fsimpl-tick-factor=200 to get around the
  compiler.
 
  What do I need to know about changes to threading and event logging
  between 7.6 and 7.8? Is there some general documentation somewhere that
  might help?
 
  I am on Ubuntu 14.04 LTS. I downloaded the 7.8 tool chain tar ball and
  installed myself, after removing 7.6 with apt-get.
 
  Any hints appreciated.
 
  Mike
 
 
 
 
 
 


Re: Thread behavior in 7.8.3

2014-10-29 Thread Edward Z. Yang
Yes, that's right.

I brought it up because you mentioned that there might still be
occasional delays, and those might be caused by a thread not being
preemptible for a while.

Edward

Excerpts from John Lato's message of 2014-10-29 17:31:45 -0700:
 My understanding is that -fno-omit-yields is subtly different.  I think
 that's for the case when a function loops without performing any heap
 allocations, and thus would never yield even after the context switch
 timeout.  In my case the looping function does perform heap allocations and
 does eventually yield, just not until after the timeout.
 
 Is that understanding correct?
 
 (technically, doesn't it change to yielding after stack checks or something
 like that?)
 
 On Thu, Oct 30, 2014 at 8:24 AM, Edward Z. Yang ezy...@mit.edu wrote:
 
  I don't think this is directly related to the problem, but if you have a
  thread that isn't yielding, you can force it to yield by using
  -fno-omit-yields on your code.  It won't help if the non-yielding code
  is in a library, and it won't help if the problem was that you just
  weren't setting timeouts finely enough (which sounds like what was
  happening). FYI.
 
  Edward
 
  Excerpts from John Lato's message of 2014-10-29 17:19:46 -0700:
   I guess I should explain what that flag does...
  
   The GHC RTS maintains capabilities, the number of capabilities is
  specified
   by the `+RTS -N` option.  Each capability is a virtual machine that
   executes Haskell code, and maintains its own runqueue of threads to
  process.
  
   A capability will perform a context switch at the next heap block
   allocation (every 4k of allocation) after the timer expires.  The timer
   defaults to 20ms, and can be set by the -C flag.  Capabilities perform
   context switches in other circumstances as well, such as when a thread
   yields or blocks.
  
   My guess is that either the context switching logic changed in ghc-7.8,
  or
   possibly your code used to trigger a switch via some other mechanism
  (stack
   overflow or something maybe?), but is optimized differently now so
  instead
   it needs to wait for the timer to expire.
  
   The problem we had was that a time-sensitive thread was getting scheduled
   on the same capability as a long-running non-yielding thread, so the
   time-sensitive thread had to wait for a context switch timeout (even
  though
   there were free cores available!).  I expect even with -N4 you'll still
  see
   occasional delays (perhaps 5% of calls).
  
   We've solved our problem with judicious use of `forkOn`, but that won't
   help at N1.
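
The `forkOn` workaround mentioned here looks roughly like the following. This is a hedged sketch, not the posters' actual code: the thread bodies and capability numbers are invented, and it only matters when the program is run with `+RTS -N2` or more.

```haskell
import Control.Concurrent (forkOn, getNumCapabilities, threadDelay)
import Control.Monad (forever)

main :: IO ()
main = do
  caps <- getNumCapabilities
  -- Pin the long-running, rarely-yielding worker to capability 0 ...
  _ <- forkOn 0 busyWorker
  -- ... and keep the time-sensitive thread on a different capability,
  -- so it never waits out a context-switch timeout behind the worker.
  _ <- forkOn (1 `mod` caps) timeSensitive
  threadDelay 2000000

-- Stand-in for a non-yielding computation.
busyWorker :: IO ()
busyWorker = forever (return ())

-- Stand-in for the latency-sensitive thread.
timeSensitive :: IO ()
timeSensitive = forever (threadDelay 1000)
```

As noted in the thread, this does nothing at -N1, where both threads share the single capability.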
  
   We did see this behavior in 7.6, but it's definitely worse in 7.8.
  
   Incidentally, has there been any interest in a work-stealing scheduler?
   There was a discussion from about 2 years ago, in which Simon Marlow
  noted
   it might be tricky, but it would definitely help in situations like this.
  
   John L.
  
   On Thu, Oct 30, 2014 at 8:02 AM, Michael Jones m...@proclivis.com
  wrote:
  
John,
   
Adding -C0.005 makes it much better. Using -C0.001 makes it behave more
like -N4.
   
Thanks. This saves my project, as I need to deploy on a single core
  Atom
and was stuck.
   
Mike
   
On Oct 29, 2014, at 5:12 PM, John Lato jwl...@gmail.com wrote:
   
By any chance do the delays get shorter if you run your program with
  `+RTS
-C0.005` ?  If so, I suspect you're having a problem very similar to
  one
that we had with ghc-7.8 (7.6 too, but it's worse on ghc-7.8 for some
reason), involving possible misbehavior of the thread scheduler.
   
On Wed, Oct 29, 2014 at 2:18 PM, Michael Jones m...@proclivis.com
  wrote:
   
I have a general question about thread behavior in 7.8.3 vs 7.6.X
   
I moved from 7.6 to 7.8 and my application behaves very differently. I
have three threads, an application thread that plots data with
  wxhaskell or
sends it over a network (depends on settings), a thread doing usb bulk
writes, and a thread doing usb bulk reads. Data is moved around with
  TChan,
and TVar is used for coordination.
   
When the application was compiled with 7.6, my stream of usb traffic
  was
smooth. With 7.8, there are lots of delays where nothing seems to be
running. These delays are up to 40ms, whereas with 7.6 delays were 1ms or so.
   
When I add -N2 or -N4, the 7.8 program runs fine. But on 7.6 it runs
  fine
without with -N2/4.
   
The program is compiled -O2 with profiling. The -N2/4 version uses
  more
memory,  but in both cases with 7.8 and with 7.6 there is no space
  leak.
   
I tried to compile and use -ls so I could take a look with
  threadscope,
but the application hangs and writes no data to the file. The CPU
  fans run
wild like it is in an infinite loop. It at least pops an unpainted
wxhaskell window, so it got partially running.
   
One of my libraries uses option -fsimpl-tick-factor=200 to get around

Re: GHC 7.4.2 on Ubuntu Trusty

2014-10-28 Thread Edward Z. Yang
Excerpts from Yitzchak Gale's message of 2014-10-28 13:58:08 -0700:
 How about this: Currently, every GHC source distribution
 requires no later than its own version of GHC for bootstrapping.
 Going backwards, that chops up the sequence of GHC versions
 into tiny incompatible pieces - there is no way to start with a
 working GHC and work backwards to an older version by compiling
 successively older GHC sources.
 
 If instead each GHC could be compiled using at least one
 subsequent version, the chain would not be broken. I.e.,
 always provide a compatibility flag or some other reasonably
 simple mechanism that would enable the current GHC to
 compile the source code of at least the last previous released
 version.

Here is an alternate proposal: when we make a new major version release,
we should also make a minor version release of the previous series, which
is prepped so that it can compile from the new major version.  If it
is the case that one version of the compiler can compile any other
version in the same series, this would be sufficient to go backwards.

Concretely, the action plan is very simple too: take 7.6 and apply as
many patches as is necessary to make it compile from 7.8, and cut
a release with those patches.

Edward
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: optimizing StgPtr allocate (Capability *cap, W_ n)

2014-10-16 Thread Edward Z. Yang
Hi Bulat,

This seems quite reasonable to me. Have you eyeballed the assembly
GCC produces to see that the hotpath is improved? If you can submit
a patch that would be great!

Cheers,
Edward

Excerpts from Bulat Ziganshin's message of 2014-10-14 10:08:59 -0700:
 Hello Glasgow-haskell-users,
 
 i'm looking at the 
 https://github.com/ghc/ghc/blob/23bb90460d7c963ee617d250fa0a33c6ac7bbc53/rts/sm/Storage.c#L680
 
 if i understand correctly, it's a speed-critical routine?
 
 i think that it may be improved in this way:
 
 StgPtr allocate (Capability *cap, W_ n)
 {
     bdescr *bd;
     StgPtr p;
 
     TICK_ALLOC_HEAP_NOCTR(WDS(n));
     CCS_ALLOC(cap->r.rCCCS,n);
 
     /// here starts new improved code:
 
     bd = cap->r.rCurrentAlloc;
     if (bd == NULL || bd->free + n > bd->end) {
         if (n >= LARGE_OBJECT_THRESHOLD/sizeof(W_)) {
 
         }
         if (bd->free + n <= bd->start + BLOCK_SIZE_W) {
             bd->end = min (bd->start + BLOCK_SIZE_W,
                            bd->free + LARGE_OBJECT_THRESHOLD);
             goto usual_alloc;
         }
     }
 
     /// and here it stops
 
 usual_alloc:
     p = bd->free;
     bd->free += n;
 
     IF_DEBUG(sanity, ASSERT(*((StgWord8*)p) == 0xaa));
     return p;
 }
 
 
 i think it's obvious - we consolidate two if's on the critical path
 into a single one, plus avoid one ADD by keeping the highly-useful
 bd->end pointer
 
 further improvements may include removing the bd==NULL check by
 initializing bd->free=bd->end=NULL and moving the entire if body
 into a separate slow_allocate() procedure marked noinline, with
 allocate() probably marked as forceinline:
 
 StgPtr allocate (Capability *cap, W_ n)
 {
     bdescr *bd;
     StgPtr p;
 
     TICK_ALLOC_HEAP_NOCTR(WDS(n));
     CCS_ALLOC(cap->r.rCCCS,n);
 
     bd = cap->r.rCurrentAlloc;
     if (bd->free + n > bd->end)
         return slow_allocate(cap,n);
 
     p = bd->free;
     bd->free += n;
 
     IF_DEBUG(sanity, ASSERT(*((StgWord8*)p) == 0xaa));
     return p;
 }
 
 this change will greatly simplify the optimizer's work. according to my
 experience, current C++ compilers are weak at optimizing large
 functions with complex execution paths, and such transformations really
 improve the generated code
 


Re: hmatrix

2014-08-24 Thread Edward Z. Yang
Hello Adrian,

This sounds like a definite bug in Cabal, in that it should report
accordingly if it is looking for both static and dynamic versions
of the library, and only finds the static one.  Can you file a bug
report?

Thanks,
Edward

Excerpts from Adrian Victor Crisciu's message of 2014-08-23 23:45:48 +0100:
 After 3 days of frustrating trials and errors, I managed to install the new
 hmatrix package on Slackware 13.1. I post this message in case anyone else
 hits the same problem, as the process requires some alteration of the
 standard build process of ATLAS, LAPACK, hmatrix and hmatrix-gsl. The
 following steps assume that LAPACK is built against an optimized ATLAS
 library.
 
 1.) By default, ATLAS builds only static libraries. However, hmatrix needs
 shared objects, so ATLAS should be configured with the --share option and,
 after the build is complete, the commands make shared and/or make
 ptshared need to be issued in BUILDDIR/lib
 
 2.) LAPACK also builds by default only static libraries and, for the same
 reason as above, we need position independent code in ALL the objects in
 liblapack. In order to do this we need to
   2.1.) Add -fPIC to OPTS, NOOPT and LOADOPT in LAPACKROOT/make.inc
2.2.) Change the BLASLIB macro in the same file to point to the
 optimized tatlas (or satlas) library
   2.3.) Add the target liblapack.so to SRC/Makefile:
   ../liblapack.so: $(ALLOBJ)
 gfortran -shared -W1 -o $@ $(ALLOBJ)
 (This step is a corrected version of
 http://theoryno3.blogspot.ro/2010/12/compiling-lapack-as-shared-library-in.html
 )
 
 3.) Change the extra-libraries line in hmatrix.cabal to read:
   extra-libraries: tatlas lapack
 
 4.) Change the extra-library line in hmatrix-gsl to read:
extra-libraries: gslcblas gsl
 
 Again, this procedure worked for my Slackware 13.1 linux box, but I think
 it will work on any decent linux machine.
 
 Thanks everyone for your time and useful comments!
 Adrian Victor.


Re: hmatrix-0.16.0.4 installation problem

2014-08-22 Thread Edward Z. Yang
Excerpts from Adrian Victor Crisciu's message of 2014-08-22 10:55:00 +0100:
 I tried the following command line:
 
 cabal install --enable-documentation
 --extra-include-dirs=/usr;local/include --extra-lib-dirs=/usr/local/lib
 hmatrix

Is that semicolon a typo?

Edward


Re: hmatrix-0.16.0.4 installation problem

2014-08-21 Thread Edward Z. Yang
Hello Adrian,

Are the header files for blas and lapack on your system? (I'm not sure
what the configure script for other software was checking for.)

Edward

Excerpts from Adrian Victor Crisciu's message of 2014-08-21 14:22:58 +0100:
 Sorry!
 
 This is the failed cabal install command and its output: The blas
 (libcblas.so) and lapack (both liblapack.a and liblapack.so) are in
 /usr/local/lib64, so they can be easily found. And the configure script for
 other software did find them.
 
 cabal install --enable-documentation hmatrix
 
 Resolving dependencies...
 Configuring hmatrix-0.16.0.4...
 cabal: Missing dependencies on foreign libraries:
 * Missing C libraries: blas, lapack
 This problem can usually be solved by installing the system packages that
 provide these libraries (you may need the -dev versions). If the libraries
 are already installed but in a non-standard location then you can use the
 flags --extra-include-dirs= and --extra-lib-dirs= to specify where they are.
 Failed to install hmatrix-0.16.0.4
 cabal: Error: some packages failed to install:
 hmatrix-0.16.0.4 failed during the configure step. The exception was:
 ExitFailure 1
 
 Adrian-Victor


Re: 'import ccall unsafe' and parallelism

2014-08-14 Thread Edward Z. Yang
I have to agree with Brandon's diagnosis: unsafePerformIO will
take out a lock, which is likely why you are seeing no parallelism.

Edward

Excerpts from Brandon Allbery's message of 2014-08-14 17:12:00 +0100:
 On Thu, Aug 14, 2014 at 11:54 AM, Christian Höner zu Siederdissen 
 choe...@tbi.univie.ac.at wrote:
 
   go xs = unsafePerformIO $ do
     forM_ xs $ cfun
     return $ somethingUnhealthy
 
 
 I wonder if this is your real problem. `unsafePerformIO` does some extra
 locking; the FFI specifies a function `unsafeLocalState`, which in GHC is
 `unsafeDupablePerformIO` which skips the extra locking.
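
Brandon's suggestion can be sketched as below, under the assumption that the wrapped action is harmless to run twice (which `unsafeDupablePerformIO` requires); `cfun` is a stand-in for the poster's foreign call, not their real code:

```haskell
import System.IO.Unsafe (unsafeDupablePerformIO)
import Control.Monad (forM_)

-- Placeholder for the real `foreign import ccall unsafe` function.
cfun :: Int -> IO ()
cfun _ = return ()

-- Like the poster's `go`, but without unsafePerformIO's extra locking.
-- Safe only if two threads racing on the same thunk may each run the
-- action (its results must be interchangeable).
goDupable :: [Int] -> ()
goDupable xs = unsafeDupablePerformIO $ do
  forM_ xs cfun
  return ()
```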
 


Re: cabal repl failing silently on missing exposed-modules

2014-08-08 Thread Edward Z. Yang
If you haven't already, go file a bug on
https://github.com/haskell/cabal/issues

Edward

Excerpts from cheater00 .'s message of 2014-08-06 15:18:04 +0100:
 Hi,
 I have just spent some time trying to figure out why all of a sudden
 cabal repl silently exits without an error message. What helped was
 to take a project that could launch the repl and compare the cabal
 files to my new project. It turns out the exposed-modules entry was
 missing. I was wondering whether this behaviour was intentional, as I
 don't recollect this happening before, but I don't have older systems
 to test this on.
 
 The reason I wanted to run a repl without editing exposed modules was
 to test some dependencies I pulled in to the sandbox with cabal
 install. The package in question didn't have any code of its own yet.
 In this case I would just expect ghci to load with the Prelude.
 
 Thanks!


Re: Failure compiling ghc-mtl with ghc-7.8.{2,3}

2014-07-20 Thread Edward Z. Yang
The last time I saw this error, it was because the package database
was messed up (there was an instance of MonadIO in scope, but it
was for the wrong package.)  However, I don't know what the source
of the problem is here.

Edward

Excerpts from i hamsa's message of 2014-07-20 08:26:52 +0100:
 I was trying to upgrade to ghc-7.8 the other day, and got this
 compilation failure when building ghc-mtl-1.2.1.0 (see the end of the
 message).
 
 I'm using the haskell overlay on Gentoo Linux straight out of the box,
 no local cabal installations of anything.
 
 Now I was told that other people can compile ghc-mtl with 7.8 just
 fine, so there must be something broken in my specific configuration.
 What would be an effective way to approach the situation?
 
 In the sources I see that an instance of MonadIO GHC.Ghc does exist. I
 don't understand these errors. Are there multiple different MonadIO
 classes in different modules?
 
 Thank you and happy hacking.
 
 Now the errors:
 
 Control/Monad/Ghc.hs:42:15:
 No instance for (GHC.MonadIO Ghc)
   arising from the 'deriving' clause of a data type declaration
 Possible fix:
   use a standalone 'deriving instance' declaration,
 so you can specify the instance context yourself
 When deriving the instance for (GHC.ExceptionMonad Ghc)
 
 Control/Monad/Ghc.hs:46:15:
 No instance for (MonadIO GHC.Ghc)
   arising from the 'deriving' clause of a data type declaration
 Possible fix:
   use a standalone 'deriving instance' declaration,
 so you can specify the instance context yourself
 When deriving the instance for (MonadIO Ghc)
 
 Control/Monad/Ghc.hs:49:15:
 No instance for (GHC.MonadIO Ghc)
   arising from the 'deriving' clause of a data type declaration
 Possible fix:
   use a standalone 'deriving instance' declaration,
 so you can specify the instance context yourself
 When deriving the instance for (GHC.GhcMonad Ghc)
 


Re: Failure compiling ghc-mtl with ghc-7.8.{2,3}

2014-07-20 Thread Edward Z. Yang
It looks like you will have to install old versions of mtl/exceptions
which work on transformers-0.3.0.0, although undoubtedly the real
problem is that GHC should update what version of transformers it
is distributing.

Edward

Excerpts from i hamsa's message of 2014-07-20 19:25:36 +0100:
 I think I found the problem.
 
 package ghc-7.8.3 requires transformers-0.3.0.0
 package mtl-2.2.1 requires transformers-0.4.1.0
 package exceptions-0.6.1 requires transformers-0.4.1.0
 
 I wonder how is this ever supposed to work :(
 
 On Sun, Jul 20, 2014 at 9:01 PM, Edward Z. Yang ezy...@mit.edu wrote:
  The last time I saw this error, it was because the package database
  was messed up (there was an instance of MonadIO in scope, but it
  was for the wrong package.)  However, I don't know what the source
  of the problem is here.
 
  Edward
 
  Excerpts from i hamsa's message of 2014-07-20 08:26:52 +0100:
  I was trying to upgrade to ghc-7.8 the other day, and got this
  compilation failure when building ghc-mtl-1.2.1.0 (see the end of the
  message).
 
  I'm using the haskell overlay on Gentoo Linux straight out of the box,
  no local cabal installations of anything.
 
  Now I was told that other people can compile ghc-mtl with 7.8 just
  fine, so there must be something broken in my specific configuration.
  What would be an effective way to approach the situation?
 
  In the sources I see that an instance of MonadIO GHC.Ghc does exist. I
  don't understand these errors. Are there multiple different MonadIO
  classes in different modules?
 
  Thank you and happy hacking.
 
  Now the errors:
 
  Control/Monad/Ghc.hs:42:15:
  No instance for (GHC.MonadIO Ghc)
arising from the 'deriving' clause of a data type declaration
  Possible fix:
use a standalone 'deriving instance' declaration,
  so you can specify the instance context yourself
  When deriving the instance for (GHC.ExceptionMonad Ghc)
 
  Control/Monad/Ghc.hs:46:15:
  No instance for (MonadIO GHC.Ghc)
arising from the 'deriving' clause of a data type declaration
  Possible fix:
use a standalone 'deriving instance' declaration,
  so you can specify the instance context yourself
  When deriving the instance for (MonadIO Ghc)
 
  Control/Monad/Ghc.hs:49:15:
  No instance for (GHC.MonadIO Ghc)
arising from the 'deriving' clause of a data type declaration
  Possible fix:
use a standalone 'deriving instance' declaration,
  so you can specify the instance context yourself
  When deriving the instance for (GHC.GhcMonad Ghc)
 
 


Re: Using mutable array after an unsafeFreezeArray, and GC details

2014-05-12 Thread Edward Z. Yang
Excerpts from Brandon Simmons's message of 2014-05-10 13:57:40 -0700:
 Another silly question: when card-marking happens after a write or
 CAS, does that indicate this segment maybe contains old-to-new
 generation references, so be sure to preserve (scavenge?) them from
 collection ? In my initial question I was thinking of the cards as
 indicating here be garbage (e.g. a previous overwritten array
 value), but I think I had the wrong idea about how copying GC works
 generally (shouldn't it really be called Non-Garbage Preservation?).

That's correct.

Cheers,
Edward


Re: Using mutable array after an unsafeFreezeArray, and GC details

2014-05-09 Thread Edward Z. Yang
Hello Brandon,

Excerpts from Brandon Simmons's message of 2014-05-08 16:18:48 -0700:
 I have an unusual application with some unusual performance problems
 and I'm trying to understand how I might use unsafeFreezeArray to help
 me, as well as understand in detail what's going on with boxed mutable
 arrays and GC. I'm using the interface from 'primitive' below.
 
 First some basic questions, then a bit more background
 
 1) What happens when I do `newArray s x >>= \a -> unsafeFreezeArray a >>
 return a` and then use `a`? What problems could that cause?

Your code as written wouldn't compile, but assuming you're talking about
the primops newArray# and unsafeFreezeArray#, what this operation does
is allocate a new array of pointers (initially recorded as mutable), and
then freezes it in-place (by changing the info-table associated with
it), but while maintaining a pointer to the original mutable array.  Nothing bad
will happen immediately, but if you use this mutable reference to mutate
the pointer array, you can cause a crash (in particular, if the array
makes it to the old generation, it will not be on the mutable list and
so if you mutate it, you may be missing roots.)

 2) And what if a do a `cloneMutableArray` on `a` and likewise use the
 resulting array?

If you do the clone before freezing, that's fine for all use-cases;
if you do the clone after, you will end up with the same result as (1).
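
The safe ordering from (2) can be sketched with the primitive package's API. This is an illustration, assuming the usual Data.Primitive.Array interface:

```haskell
import Control.Monad.Primitive (PrimMonad, PrimState)
import Data.Primitive.Array

-- Clone first, then freeze only the private clone: the original stays
-- mutable, and the frozen copy can never be reached through a mutable
-- reference, so later writes to `ma` remain safe.
snapshot :: PrimMonad m => MutableArray (PrimState m) a -> Int -> m (Array a)
snapshot ma len = do
  copy <- cloneMutableArray ma 0 len
  unsafeFreezeArray copy
```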

 Background: I've been looking into an issue [1] in a library in which
 as more mutable arrays are allocated, GC dominates (I think I verified
 this?) and all code gets slower in proportion to the number of mutable
 arrays that are hanging around.
 
 I've been trying to understand how this is working internally. I don't
 quite understand how the remembered set works with respect to
 MutableArray. As best I understand: the remembered set in generation G
 points to certain objects in older generations, which objects hold
 references to objects in G. Then for MutableArrays specifically,
 card-marking is used to mark regions of the array with garbage in some
 way.
 
 So my hypothesis is the slowdown is associated with the size of the
 remembered set, and whatever the GC has to do on it. And in my tests,
 freezing the array seems to make that overhead (at least the overhead
 proportional to number of arrays) disappear.

You're basically correct.  In the current GC design, mutable arrays of
pointers are always placed on the mutable list.  The mutable list of
generations which are not being collected are always traversed; thus,
the number of pointer arrays corresponds to a linear overhead for minor GCs.

Here is a feature request tracking many of the infelicities that our
current GC design has:  https://ghc.haskell.org/trac/ghc/ticket/7662
The upshot is that the Haskell GC is very nicely tuned for mostly
immutable workloads, but there are some bad asymptotics when your
heap has lots of mutable objects.  This is generally a hard problem:
tuned GC implementations for mutable languages are a lot of work!
(Just ask the JVM implementors.)

 Now I'm really lost in the woods though. My hope is that I might be
 able to safely use unsafeFreezeArray to help me here [3]. Here are the
 particulars of how I use MutableArray in my algorithm, which are
 somewhat unusual:
   - keep around a small template `MutableArray Nothing`
   - use cloneMutableArray for fast allocation of new arrays
   - for each array only a *single* write (CAS actually) happens at each 
 position
 
 In fact as far as I can reason, there ought to be no garbage to
 collect at all until the entire array becomes garbage (the initial
 value is surely shared, especially since I'm keeping this template
 array around to clone from, right?). In fact I was even playing with
 the idea of rolling a new CAS that skips the card-marking stuff.

I don't understand your full workload, but if you have a workload that
involves creating an array, mutating it over a short period of time,
and then never mutating it afterwards, you should simply freeze it after
you are done writing it.  Once frozen, the array will no longer be kept
on the mutable list and you won't pay for it when doing GC.  However,
the fact that you are doing a CAS makes it seem to me that your workflow
may be more complicated than that...
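
The freeze-when-done pattern described here might be sketched as follows with the primitive package; the payload (`squares`) is invented for illustration:

```haskell
import Control.Monad.ST (runST)
import Data.Primitive.Array

-- Fill a mutable array during a short burst of writes, freeze it once,
-- and never touch the mutable reference again; after unsafeFreezeArray
-- the array is no longer kept on the mutable list, so minor GCs stop
-- paying for it.
squares :: Int -> Array Int
squares n = runST $ do
  ma <- newArray n 0
  let fill i | i >= n    = return ()
             | otherwise = writeArray ma i (i * i) >> fill (i + 1)
  fill 0
  unsafeFreezeArray ma
```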

Cheers,
Edward


Re: Using mutable array after an unsafeFreezeArray, and GC details

2014-05-09 Thread Edward Z. Yang
Excerpts from Carter Schonwald's message of 2014-05-09 16:49:07 -0700:
 Any chance you could try to use storable or unboxed vectors?

Neither of those will work if, at the end of the day, you need to
store pointers to heap objects.

Edward


Re: memory ordering

2013-12-31 Thread Edward Z. Yang
I was thinking about my response, and realized there was one major
misleading thing in my description.  The load reordering I described
applies to load instructions in C-- proper, i.e. things that show up
in the C-- dump as:

W_ x = I64[...addr...]

Reads to IORefs and reads to vectors get compiled inline (as they
eventually translate into inline primops), so my admonitions are
applicable.

However, the story with *foreign primops* (which is how loadLoadBarrier
in atomic-primops is defined, how you might imagine defining a custom
read function as a primop) is a little different.  First, what does a
call to an foreign primop compile into? It is *not* inlined, so it will
eventually get compiled into a jump (this could be a problem if you're
really trying to squeeze out performance!)  Second, the optimizer is a
bit more conservative when it comes to primop calls (internally referred
to as unsafe foreign calls); at the moment, the optimizer assumes
these foreign calls clobber heap memory, so we *automatically* will not
push loads/stores beyond this boundary. (NB: We reserve the right to
change this in the future!)

This is probably why atomic-primops, as it is written today, seems to
work OK, even in the presence of the optimizer.  But I also have a hard
time believing it gives the speedups you want, due to the current
design. (CC'd Ryan Newton, because I would love to be wrong here, and
maybe he can correct me on this note.)

Cheers,
Edward

P.S. loadLoadBarrier compiles to a no-op on x86 architectures, but
because it's not inlined I think you will still end up with a jump (LLVM
might be able to eliminate it).

Excerpts from John Lato's message of 2013-12-31 03:01:58 +0800:
 Hi Edward,
 
 Thanks very much for this reply, it answers a lot of questions I'd had.
  I'd hoped that ordering would be preserved through C--, but c'est la vie.
  Optimizing compilers are ever the bane of concurrent algorithms!
 
 stg/SMP.h does define a loadLoadBarrier, which is exposed in Ryan Newton's
 atomic-primops package.  From the docs, I think that's a general read
 barrier, and should do what I want.  Assuming it works properly, of course.
  If I'm lucky it might even be optimized out.
 
 Thanks,
 John
 
 On Mon, Dec 30, 2013 at 6:04 AM, Edward Z. Yang ezy...@mit.edu wrote:
 
  Hello John,
 
  Here are some prior discussions (which I will attempt to summarize
  below):
 
  http://www.haskell.org/pipermail/haskell-cafe/2011-May/091878.html
  http://www.haskell.org/pipermail/haskell-prime/2006-April/001237.html
  http://www.haskell.org/pipermail/haskell-prime/2006-March/001079.html
 
  The guarantees that Haskell and GHC give in this area are hand-wavy at
  best; at the moment, I don't think Haskell or GHC have a formal memory
  model—this seems to be an open research problem. (Unfortunately, AFAICT
  all the researchers working on relaxed memory models have their hands
  full with things like C++ :-)
 
  If you want to go ahead and build something that /just/ works for a
  /specific version/ of GHC, you will need to answer this question
  separately for every phase of the compiler.  For Core and STG, monads
  will preserve ordering, so there is no trouble.  However, for C--, we
  will almost certainly apply optimizations which reorder reads (look at
  CmmSink.hs).  To properly support your algorithm, you will have to add
  some new read barrier mach-ops, and teach the optimizer to respect them.
  (This could be fiendishly subtle; it might be better to give C-- a
  memory model first.)  These mach-ops would then translate into
  appropriate arch-specific assembly or LLVM instructions, preserving
  the guarantees further.
 
  This is not related to your original question, but the situation is a
  bit better with regards to reordering stores: we have a WriteBarrier
  MachOp, which in principle, prevents store reordering.  In practice, we
  don't seem to actually have any C-- optimizations that reorder stores.
  So, at least you can assume these will work OK!
 
  Hope this helps (and is not too inaccurate),
  Edward
 
  Excerpts from John Lato's message of 2013-12-20 09:36:11 +0800:
   Hello,
  
   I'm working on a lock-free algorithm that's meant to be used in a
   concurrent setting, and I've run into a possible issue.
  
   The crux of the matter is that a particular function needs to perform the
   following:
  
 x <- MVector.read vec ix
 position <- readIORef posRef
  
   and the algorithm is only safe if these two reads are not reordered (both
   the vector and IORef are written to by other threads).
  
   My concern is, according to standard Haskell semantics this should be
  safe,
   as IO sequencing should guarantee that the reads happen in-order.  Of
   course this also relies upon the architecture's memory model, but x86
  also
   guarantees that reads happen in order.  However doubts remain; I do not
   have confidence that the code generator will handle this properly.  In
   particular, LLVM may

Re: memory ordering

2013-12-31 Thread Edward Z. Yang
 Second, the optimizer is a bit more conservative when it comes to
 primop calls (internally referred to as unsafe foreign calls)

Sorry, I need to correct myself here.  Foreign primops, and most
out-of-line primops, compile into jumps which end basic blocks, which
constitute hard boundaries since we don't do really do inter-block
optimization.  Unsafe foreign calls are generally reserved for function
calls which use the C calling convention; primops manage the return
convention themselves.

Edward


Re: memory ordering

2013-12-30 Thread Edward Z. Yang
Hello John,

Here are some prior discussions (which I will attempt to summarize
below):

http://www.haskell.org/pipermail/haskell-cafe/2011-May/091878.html
http://www.haskell.org/pipermail/haskell-prime/2006-April/001237.html
http://www.haskell.org/pipermail/haskell-prime/2006-March/001079.html

The guarantees that Haskell and GHC give in this area are hand-wavy at
best; at the moment, I don't think Haskell or GHC have a formal memory
model—this seems to be an open research problem. (Unfortunately, AFAICT
all the researchers working on relaxed memory models have their hands
full with things like C++ :-)

If you want to go ahead and build something that /just/ works for a
/specific version/ of GHC, you will need to answer this question
separately for every phase of the compiler.  For Core and STG, monads
will preserve ordering, so there is no trouble.  However, for C--, we
will almost certainly apply optimizations which reorder reads (look at
CmmSink.hs).  To properly support your algorithm, you will have to add
some new read barrier mach-ops, and teach the optimizer to respect them.
(This could be fiendishly subtle; it might be better to give C-- a
memory model first.)  These mach-ops would then translate into
appropriate arch-specific assembly or LLVM instructions, preserving
the guarantees further.

This is not related to your original question, but the situation is a
bit better with regards to reordering stores: we have a WriteBarrier
MachOp, which in principle, prevents store reordering.  In practice, we
don't seem to actually have any C-- optimizations that reorder stores.
So, at least you can assume these will work OK!

Hope this helps (and is not too inaccurate),
Edward

Excerpts from John Lato's message of 2013-12-20 09:36:11 +0800:
 Hello,
 
 I'm working on a lock-free algorithm that's meant to be used in a
 concurrent setting, and I've run into a possible issue.
 
 The crux of the matter is that a particular function needs to perform the
 following:
 
  x <- MVector.read vec ix
  position <- readIORef posRef
 
 and the algorithm is only safe if these two reads are not reordered (both
 the vector and IORef are written to by other threads).
 
 My concern is, according to standard Haskell semantics this should be safe,
 as IO sequencing should guarantee that the reads happen in-order.  Of
 course this also relies upon the architecture's memory model, but x86 also
 guarantees that reads happen in order.  However doubts remain; I do not
 have confidence that the code generator will handle this properly.  In
 particular, LLVM may freely re-order loads of NotAtomic and Unordered
 values.
 
 The one hope I have is that ghc will preserve IO semantics through the
 entire pipeline.  This seems like it would be necessary for proper handling
 of exceptions, for example.  So, can anyone tell me if my worries are
 unfounded, or if there's any way to ensure the behavior I want?  I could
 change the readIORef to an atomicModifyIORef, which should issue an mfence,
 but that seems a bit heavy-handed as just a read fence would be sufficient
 (although even that seems more than necessary).
 
 Thanks,
 John L.
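
For later readers, the two options weighed in this thread might look as follows. This is a hedged sketch: option 1 assumes the atomic-primops package exposes `Data.Atomics.loadLoadBarrier` as discussed in the thread, and option 2 uses only base, at the cost of a full barrier.

```haskell
import Data.IORef (IORef, atomicModifyIORef')

-- Option 1 (assumed atomic-primops API): an explicit read fence between
-- the two reads, shown as a comment since it needs the surrounding
-- vector code:
--
--   x        <- MVector.read vec ix
--   loadLoadBarrier                  -- from Data.Atomics
--   position <- readIORef posRef

-- Option 2 (base only): a read carrying atomicModifyIORef's ordering
-- guarantees; heavier than a pure read fence.
fencedRead :: IORef a -> IO a
fencedRead ref = atomicModifyIORef' ref (\x -> (x, x))
```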


Re: blocking parallel program

2013-10-19 Thread Edward Z. Yang
Oh I see; the problem is the GHC RTS is attempting to shut down,
and in order to do this it needs to grab all of the capabilities. However,
one of them is in an uninterruptible loop, so the program hangs (e.g.
if you change the program as follows:

main :: IO ()
main = do
  y <- getArgs
  forkIO $ do
    loop (y == ["yield"])
  threadDelay 1000
)

With a sufficiently recent version of GHC, if you compile with -fno-omit-yields,
that should fix the problem.

Edward

Excerpts from Facundo Domínguez's message of Sat Oct 19 16:05:15 -0700 2013:
 Thanks. I just tried that. Unfortunately, it doesn't seem to help.
 
 Facundo
 
 On Sat, Oct 19, 2013 at 8:47 PM, Edward Z. Yang ezy...@mit.edu wrote:
  Hello Facundo,
 
  The reason is that you have compiled the program to be multithreaded, but it
  is not running with multiple cores. Compile also with -rtsopts and then
  pass +RTS -N2 to the program.
 
  Excerpts from Facundo Domínguez's message of Sat Oct 19 15:19:22 -0700 2013:
  Hello,
 Below is a program that seems to block indefinitely with ghc in a
  multicore machine. This program has a loop that does not produce
  allocations, and I understand that this may grab one of the cores. The
  question is, why can't the other cores take the blocked thread?
 
  The program was compiled with:
 
  $ ghc --make -O -threaded test.hs
 
  and it is run with:
 
  $ ./test
 
  Program text follows.
 
  Thanks,
  Facundo
 
  
 
  import Control.Concurrent
  import Control.Monad
  import System.Environment
 
  main :: IO ()
  main = do
  y <- getArgs
  mv0 <- newEmptyMVar
  mv1 <- newEmptyMVar
  forkIO $ do
    takeMVar mv0
    putMVar mv1 ()
    loop (y == ["yield"])
  putMVar mv0 ()
  takeMVar mv1

loop :: Bool -> IO ()
loop cooperative = go
  where
    go = when cooperative yield >> go


Re: Trying to compile ghc HEAD on xubuntu 13.04-x64

2013-10-04 Thread Edward Z. Yang
As a workaround, add this to your mk/build.mk

HADDOCK_DOCS   = NO
BUILD_DOCBOOK_HTML = NO
BUILD_DOCBOOK_PS   = NO
BUILD_DOCBOOK_PDF  = NO

This is a bug.

Edward

Excerpts from Nathan Hüsken's message of Fri Oct 04 13:55:01 -0700 2013:
 Hey,
 
 because I have touble with ghci and packages with FFI, it was suggested 
 to me to compile and use ghc HEAD.
 
 I am on xubuntu 13.04 64bit and try to do a perf build. It fails with:
 
 compiler/ghc.mk:478: warning: ignoring old commands for target 
 `compiler/stage2/build/libHSghc-7.7.20131004-ghc7.7.20131004.so'
 /home/ls/src/ghc/inplace/bin/haddock 
 --odir=libraries/ghc-prim/dist-install/doc/html/ghc-prim 
 --no-tmp-comp-dir 
 --dump-interface=libraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock
  
 --html --hoogle --title=ghc-prim-0.3.1.0: GHC primitives 
 --prologue=libraries/ghc-prim/dist-install/haddock-prologue.txt 
 --optghc=-hisuf --optghc=dyn_hi --optghc=-osuf --optghc=dyn_o 
 --optghc=-hcsuf --optghc=dyn_hc --optghc=-fPIC --optghc=-dynamic 
 --optghc=-O --optghc=-H64m --optghc=-package-name 
 --optghc=ghc-prim-0.3.1.0 --optghc=-hide-all-packages --optghc=-i 
 --optghc=-ilibraries/ghc-prim/. 
 --optghc=-ilibraries/ghc-prim/dist-install/build 
 --optghc=-ilibraries/ghc-prim/dist-install/build/autogen 
 --optghc=-Ilibraries/ghc-prim/dist-install/build 
 --optghc=-Ilibraries/ghc-prim/dist-install/build/autogen 
 --optghc=-Ilibraries/ghc-prim/. --optghc=-optP-include 
 --optghc=-optPlibraries/ghc-prim/dist-install/build/autogen/cabal_macros.h 
 --optghc=-package --optghc=rts-1.0 --optghc=-package-name 
 --optghc=ghc-prim --optghc=-XHaskell98 --optghc=-XCPP 
 --optghc=-XMagicHash --optghc=-XForeignFunctionInterface 
 --optghc=-XUnliftedFFITypes --optghc=-XUnboxedTuples 
 --optghc=-XEmptyDataDecls --optghc=-XNoImplicitPrelude --optghc=-O2 
 --optghc=-no-user-package-db --optghc=-rtsopts --optghc=-odir 
 --optghc=libraries/ghc-prim/dist-install/build --optghc=-hidir 
 --optghc=libraries/ghc-prim/dist-install/build --optghc=-stubdir 
 --optghc=libraries/ghc-prim/dist-install/build 
 libraries/ghc-prim/./GHC/Classes.hs  libraries/ghc-prim/./GHC/CString.hs 
   libraries/ghc-prim/./GHC/Debug.hs  libraries/ghc-prim/./GHC/Magic.hs 
 libraries/ghc-prim/dist-install/build/GHC/PrimopWrappers.hs 
 libraries/ghc-prim/./GHC/IntWord64.hs  libraries/ghc-prim/./GHC/Tuple.hs 
   libraries/ghc-prim/./GHC/Types.hs 
 libraries/ghc-prim/dist-install/build/autogen/GHC/Prim.hs +RTS 
 -tlibraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock.t 
 --machine-readable
 Haddock coverage:
   100% (  1 /  1) in 'GHC.IntWord64'
80% (  8 / 10) in 'GHC.Types'
17% (  1 /  6) in 'GHC.CString'
 3% (  2 / 63) in 'GHC.Tuple'
 0% (  0 /  3) in 'GHC.Debug'
 0% (  0 /366) in 'GHC.PrimopWrappers'
72% (813 /1132) in 'GHC.Prim'
   100% (  3 /  3) in 'GHC.Magic'
38% (  6 / 16) in 'GHC.Classes'
 haddock: internal error: haddock: panic! (the 'impossible' happened)
(GHC version 7.7.20131004 for x86_64-unknown-linux):
 Static flags have not been initialised!
  Please call GHC.parseStaticFlags early enough.
 
 Please report this as a GHC bug:  http://www.haskell.org/ghc/reportabug
 
 make[1]: *** 
 [libraries/ghc-prim/dist-install/doc/html/ghc-prim/ghc-prim.haddock] Error 1
 make: *** [all] Error 2
 
 Suggestions?
 Thanks!
 Nathan


Re: 7.8 Release Update

2013-09-14 Thread Edward Z. Yang
Actually, the situation is pretty bad on Windows, where dynamic-too
does not work.

Edward

Excerpts from Edward Z. Yang's message of Mon Sep 09 16:29:38 -0700 2013:
 Erm, I forgot to mention that profiling would only be enabled if
 the user asked for it.
 
 Yes, we will be producing two sets of objects by default. This is what
 the -dynamic-too flag is for, no?  I suppose you could try to compile
 your static executables using -fPIC, but that would negate the performance
 reasons why we haven't just switched to dynamic for everything.
 
 Edward
 
 Excerpts from Johan Tibell's message of Mon Sep 09 16:15:45 -0700 2013:
 That sounds terribly expensive to do on every `cabal build`, and it's a
 cost most users won't understand (what was broken before?).
  
  On Mon, Sep 9, 2013 at 4:06 PM, Edward Z. Yang ezy...@mit.edu wrote:
   If I am building some Haskell executable using 'cabal build', the
   result should be *statically linked* by default.
  
   However, subtly, if I am building a Haskell library, I would like to
   be able to load the compiled version into GHCi.
  
   So it seems to me cabal should produce v, dyn (libs only, not final
   executable) and p ways by default (but not dyn_p).
  
   Edward
  
   Excerpts from Kazu Yamamoto (山本和彦)'s message of Mon Sep 09 15:37:10 -0700 
   2013:
   Hi,
  
Kazu (or someone else), can you please file a ticket on the Cabal bug
tracker [1] if you think that this a Cabal bug?
  
   I'm not completely sure yet.
  
   GHCi 7.8 uses dynamic linking. This is true.
  
   So, what is a consensus for GHC 7.8 and cabal-install 1.18? Are they
   supposed to use dynamic linking? Or, static linking?
  
   If dynamic linking is used, GHC should provide dynamic libraries for
   profiling.
  
   If static linking is used, cabal-install should stop using dynamic
   libraries for profiling.
  
   And of course, I can make a ticket when I'm convinced.
  
   P.S.
  
   Since doctest uses GHCi internally, I might misunderstand GHC 7.8
   uses dynamic linking. Anyway, I don't understand what is right yet.
  
   --Kazu
  
  
   ___
   ghc-devs mailing list
   ghc-d...@haskell.org
   http://www.haskell.org/mailman/listinfo/ghc-devs


Re: 7.8 Release Update

2013-09-09 Thread Edward Z. Yang
Excerpts from Kazu Yamamoto (山本和彦)'s message of Sun Sep 08 19:36:19 -0700 2013:
 
 % make show VALUE=GhcLibWays
 make -r --no-print-directory -f ghc.mk show
 GhcLibWays=v p dyn
 

Yes, it looks like you are missing p_dyn from this list. I think
this is a bug in the build system.  When I look at ghc.mk
it only verifies that the p way is present, not p_dyn; and I don't
see any knobs which turn on p_dyn.
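If such a knob existed, it would presumably be set in mk/build.mk alongside the other ways; this is hypothetical, since as noted ghc.mk has no switch for it:

```make
# Hypothetical: add the profiled-dynamic way to the library ways.
GhcLibWays = v p dyn p_dyn
```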

However, I must admit to being a little confused; didn't we abandon
dynamic by default and switch to only using dynamic for GHCi (in which
case the profiling libraries ought not to matter)?

Edward



Re: 7.8 Release Update

2013-09-09 Thread Edward Z. Yang
 I think Kazu is saying that when he builds something with profiling 
 using cabal-install, it fails because cabal-install tries to build a 
 dynamic version too.  We don't want dynamic/profiled libraries (there's 
 no point, you can't load them into GHCi).  Perhaps this is something 
 that needs fixing in cabal-install?

Agreed, sounds like a Cabal install bug.

Edward



Re: 7.8 Release Update

2013-09-09 Thread Edward Z. Yang
Hello Mikhail,

It is a known issue that Template Haskell does not work with profiling (because
GHCi and profiling do not work together, and TH uses GHCi's linker). [1] 
Actually,
with the new linker patches that are landing soon we are not too far off from
having this work.

Edward

[1] http://ghc.haskell.org/trac/ghc/ticket/4837

Excerpts from Mikhail Glushenkov's message of Mon Sep 09 14:15:54 -0700 2013:
 Hi,
 
 On Mon, Sep 9, 2013 at 10:11 PM, Simon Marlow marlo...@gmail.com wrote:
 
  I think Kazu is saying that when he builds something with profiling using
  cabal-install, it fails because cabal-install tries to build a dynamic
  version too.  We don't want dynamic/profiled libraries (there's no point,
  you can't load them into GHCi).  Perhaps this is something that needs fixing
  in cabal-install?
 
 Aren't they needed when compiling libraries that are using Template
 Haskell for profiling? The issue sounds like it could be TH-related.
 



Re: 7.8 Release Update

2013-09-09 Thread Edward Z. Yang
If I am building some Haskell executable using 'cabal build', the
result should be *statically linked* by default.

However, subtly, if I am building a Haskell library, I would like to
be able to load the compiled version into GHCi.

So it seems to me cabal should produce v, dyn (libs only, not final
executable) and p ways by default (but not dyn_p).

Edward

Excerpts from Kazu Yamamoto (山本和彦)'s message of Mon Sep 09 15:37:10 -0700 2013:
 Hi,
 
  Kazu (or someone else), can you please file a ticket on the Cabal bug
  tracker [1] if you think that this a Cabal bug?
 
 I'm not completely sure yet.
 
 GHCi 7.8 uses dynamic linking. This is true.
 
 So, what is a consensus for GHC 7.8 and cabal-install 1.18? Are they
 supposed to use dynamic linking? Or, static linking?
 
 If dynamic linking is used, GHC should provide dynamic libraries for
 profiling.
 
 If static linking is used, cabal-install should stop using dynamic
 libraries for profiling.
 
 And of course, I can make a ticket when I'm convinced.
 
 P.S.
 
 Since doctest uses GHCi internally, I might misunderstand GHC 7.8
 uses dynamic linking. Anyway, I don't understand what is right yet.
 
 --Kazu
 



Re: 7.8 Release Update

2013-09-09 Thread Edward Z. Yang
Erm, I forgot to mention that profiling would only be enabled if
the user asked for it.

Yes, we will be producing two sets of objects by default. This is what
the -dynamic-too flag is for, no?  I suppose you could try to compile
your static executables using -fPIC, but that would negate the performance
reasons why we haven't just switched to dynamic for everything.

Edward

Excerpts from Johan Tibell's message of Mon Sep 09 16:15:45 -0700 2013:
 That sounds terribly expensive to do on every `cabal build`, and it's a
 cost most users won't understand (what was broken before?).
 
 On Mon, Sep 9, 2013 at 4:06 PM, Edward Z. Yang ezy...@mit.edu wrote:
  If I am building some Haskell executable using 'cabal build', the
  result should be *statically linked* by default.
 
  However, subtly, if I am building a Haskell library, I would like to
  be able to load the compiled version into GHCi.
 
  So it seems to me cabal should produce v, dyn (libs only, not final
  executable) and p ways by default (but not dyn_p).
 
  Edward
 
  Excerpts from Kazu Yamamoto (山本和彦)'s message of Mon Sep 09 15:37:10 -0700 
  2013:
  Hi,
 
   Kazu (or someone else), can you please file a ticket on the Cabal bug
   tracker [1] if you think that this a Cabal bug?
 
  I'm not completely sure yet.
 
  GHCi 7.8 uses dynamic linking. This is true.
 
  So, what is a consensus for GHC 7.8 and cabal-install 1.18? Are they
  supposed to use dynamic linking? Or, static linking?
 
  If dynamic linking is used, GHC should provide dynamic libraries for
  profiling.
 
  If static linking is used, cabal-install should stop using dynamic
  libraries for profiling.
 
  And of course, I can make a ticket when I'm convinced.
 
  P.S.
 
  Since doctest uses GHCi internally, I might misunderstand GHC 7.8
  uses dynamic linking. Anyway, I don't understand what is right yet.
 
  --Kazu
 
 
  ___
  ghc-devs mailing list
  ghc-d...@haskell.org
  http://www.haskell.org/mailman/listinfo/ghc-devs



Re: executable stack flag

2013-07-09 Thread Edward Z. Yang
I took a look at the logs and none mentioned "Hey, so it turns out
we need an executable stack for this", and as recently as Sep 17, 2011
there were patches for turning off the executable stack (courtesy of Gentoo).
So it is probably just a regression: someone added code which didn't turn
off executable stacks...

Edward

Excerpts from Jens Petersen's message of Mon Jul 08 21:36:42 -0700 2013:
 Hi,
 
 We noticed [1] in Fedora that ghc (7.4 and 7.6) are linking executables
 (again [2]) with the executable stack flag set. I haven't started looking
 at the ghc code yet but wanted to ask first if it is intentional/necessary?
  (ghc-7.0 doesn't seem to do this.) Having the flag set is considered a bit
 of a security risk, so it would be better if all generated executables did
 not have it set.
 
 I did some very basic testing of various executables, clearing their
 flags [3] and they all seemed to run ok without the executable stack flag
 set but I can't claim to have tested very exhaustively. (I thought perhaps
 it might be related to TemplateHaskell for example but even those
 executables seem to work, though I am sure I have not exercised all the
 code paths.)
 
 Does someone know the current status of this?
 Will anything break if the flag is not set?
 Is it easy to patch ghc to not set the flag?
 Does it only affect the NCG backend?
 
 Thanks, Jens
 
 [1] https://bugzilla.redhat.com/show_bug.cgi?id=973512
 [2] http://ghc.haskell.org/trac/ghc/ticket/703
 [3] using execstack -c



Re: executable stack flag

2013-07-09 Thread Edward Z. Yang
I've gone ahead and fixed it, and referenced the patches in the ticket.

Cheers,
Edward

Excerpts from Jens Petersen's message of Mon Jul 08 21:36:42 -0700 2013:
 Hi,
 
 We noticed [1] in Fedora that ghc (7.4 and 7.6) are linking executables
 (again [2]) with the executable stack flag set. I haven't started looking
 at the ghc code yet but wanted to ask first if it is intentional/necessary?
  (ghc-7.0 doesn't seem to do this.) Having the flag set is considered a bit
 of a security risk, so it would be better if all generated executables did
 not have it set.
 
 I did some very basic testing of various executables, clearing their
 flags [3] and they all seemed to run ok without the executable stack flag
 set but I can't claim to have tested very exhaustively. (I thought perhaps
 it might be related to TemplateHaskell for example but even those
 executables seem to work, though I am sure I have not exercised all the
 code paths.)
 
 Does someone know the current status of this?
 Will anything break if the flag is not set?
 Is it easy to patch ghc to not set the flag?
 Does it only affect the NCG backend?
 
 Thanks, Jens
 
 [1] https://bugzilla.redhat.com/show_bug.cgi?id=973512
 [2] http://ghc.haskell.org/trac/ghc/ticket/703
 [3] using execstack -c



Re: Why is GHC so much worse than JHC when computing the Ackermann function?

2013-04-20 Thread Edward Z. Yang
I don't seem to get the leak on latest GHC head.  Running the program
in GC debug mode in 7.6.2 is quite telling; the program is allocating
*a lot* of megablocks.  We probably fixed it though?

Edward

Excerpts from Mikhail Glushenkov's message of Sat Apr 20 01:55:10 -0700 2013:
 Hi all,
 
 This came up on StackOverflow [1]. When compiled with GHC (7.4.2 and
 7.6.2), this simple program:
 
 main = print $ ack 4 1
   where ack :: Int -> Int -> Int
         ack 0 n = n+1
         ack m 0 = ack (m-1) 1
         ack m n = ack (m-1) (ack m (n-1))
 
 consumes all available memory on my machine and slows down to a crawl.
 However, when compiled with JHC it runs in constant space and is about
 as fast as the straightforward Ocaml version (see the SO question for
 benchmark numbers).
 
 I was able to fix the space leak by using CPS-conversion, but the
 CPS-converted version is still about 10 times slower than the naive
 version compiled with JHC.
 
 I looked both at the Core and Cmm, but couldn't find anything
 obviously wrong with the generated code - 'ack' is compiled to a
 simple loop of type 'Int# - Int# - Int#'. What's more frustrating is
 that running the program with +RTS -hc makes the space leak
 mysteriously vanish.
 
 Can someone please explain where the space leak comes from and if it's
 possible to further improve the runtime of this program with GHC?
 Apparently it's somehow connected to the stack management strategy,
 since running the program with a larger stack chunk size (+RTS -kc1M)
 makes the space leak go away. Interestingly, choosing smaller stack
 chunk sizes (256K, 512K) causes it to die with an OOM exception:
 
 $ time ./Test +RTS -kc256K
 Test: out of memory (requested 2097152 bytes)
 
 
 [1] 
 http://stackoverflow.com/questions/16115815/ackermann-very-inefficient-with-haskell-ghc/16116074#16116074
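A CPS-converted version along the lines Mikhail describes (a sketch; his actual code is not shown in the thread) looks like:

```haskell
-- Continuation-passing style: the nested recursive call becomes a
-- continuation, moving evaluation state from the stack to the heap.
ack :: Int -> Int -> Int
ack m0 n0 = go m0 n0 id
  where
    go :: Int -> Int -> (Int -> Int) -> Int
    go 0 n k = k (n + 1)
    go m 0 k = go (m - 1) 1 k
    go m n k = go m (n - 1) (\r -> go (m - 1) r k)

main :: IO ()
main = print (ack 3 3)  -- 61
```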
 



Re: mask, catch, myThreadId, throwTo

2013-04-16 Thread Edward Z. Yang
OK, I've updated the docus.
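For reference, the uninterruptibleMask workaround Bertram suggests below might be sketched as follows (not the code from the thread):

```haskell
import Control.Concurrent (ThreadId, myThreadId, throwTo)
import Control.Exception (Exception, uninterruptibleMask_)

data PingPong = Ping ThreadId | Pong ThreadId
  deriving Show

instance Exception PingPong

-- throwTo may block, and blocking operations are interruptible even
-- inside mask_; uninterruptibleMask_ closes that window.
throwBack :: PingPong -> IO ()
throwBack (Ping tid) = uninterruptibleMask_ (myThreadId >>= throwTo tid . Pong)
throwBack (Pong tid) = uninterruptibleMask_ (myThreadId >>= throwTo tid . Ping)
```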

Excerpts from Felipe Almeida Lessa's message of Mon Apr 15 13:34:50 -0700 2013:
 Thanks a lot, you're correct!  The trouble is, I was misguided by the
 Interruptible operations note [1] which states that
 
 The following operations are guaranteed not to be interruptible:
 ... * everything from Control.Exception ...
 
 Well, it seems that not everything from Control.Exception fits the bill.
 
 Thanks, =)
 
 [1] 
 http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html#g:14
 
 On Mon, Apr 15, 2013 at 5:25 PM, Bertram Felgenhauer
 bertram.felgenha...@googlemail.com wrote:
  Felipe Almeida Lessa wrote:
  I have some code that is not behaving the way I thought it should.
 
  The gist of it is
 
    sleeper =
      mask_ $
      forkIOWithUnmask $ \restore ->
        forever $
          restore sleep `catch` throwBack

    throwBack (Ping tid) = myThreadId >>= throwTo tid . Pong
    throwBack (Pong tid) = myThreadId >>= throwTo tid . Ping
 
  Since (a) throwBack is executed on a masked state, (b) myThreadId is
  uninterruptible, and (c) throwTo is uninterruptible, my understanding
  is that the sleeper thread should catch all PingPong exceptions and
  never let any one of them through.
 
  (c) is wrong, throwTo may block, and blocking operations are interruptible.
 

  http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html#v:throwTo
 
  explains this in some more detail.
 
  The simplest way that throwTo can actually block in your program, as
  far as I can see, and one that will only affect the threaded RTS, is
  if the sleeper thread and whichever thread is running the other
  throwBack are executing on different capabilities; this will always
  cause throwTo to block. (You could try looking at a ghc event log to
  find out more.)
 
  I last ran into trouble like that with System.Timeout.timeout; for
  that function I finally convinced myself that uninterruptibleMask
  is the only way to avoid such problems; then throwTo will not be
  interrupted by exceptions even when it blocks. Maybe this is the
  solution for your problem, too.
 
  Hope that helps,
 
  Bertram
 
 
  ___
  Glasgow-haskell-users mailing list
  Glasgow-haskell-users@haskell.org
  http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
 



Re: mask, catch, myThreadId, throwTo

2013-04-15 Thread Edward Z. Yang
Sounds like those docs need to be fixed, in that case.

Edward

Excerpts from Felipe Almeida Lessa's message of Mon Apr 15 13:34:50 -0700 2013:
 Thanks a lot, you're correct!  The trouble is, I was misguided by the
 Interruptible operations note [1] which states that
 
 The following operations are guaranteed not to be interruptible:
 ... * everything from Control.Exception ...
 
 Well, it seems that not everything from Control.Exception fits the bill.
 
 Thanks, =)
 
 [1] 
 http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html#g:14
 
 On Mon, Apr 15, 2013 at 5:25 PM, Bertram Felgenhauer
 bertram.felgenha...@googlemail.com wrote:
  Felipe Almeida Lessa wrote:
  I have some code that is not behaving the way I thought it should.
 
  The gist of it is
 
    sleeper =
      mask_ $
      forkIOWithUnmask $ \restore ->
        forever $
          restore sleep `catch` throwBack

    throwBack (Ping tid) = myThreadId >>= throwTo tid . Pong
    throwBack (Pong tid) = myThreadId >>= throwTo tid . Ping
 
  Since (a) throwBack is executed on a masked state, (b) myThreadId is
  uninterruptible, and (c) throwTo is uninterruptible, my understanding
  is that the sleeper thread should catch all PingPong exceptions and
  never let any one of them through.
 
  (c) is wrong, throwTo may block, and blocking operations are interruptible.
 

  http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Exception.html#v:throwTo
 
  explains this in some more detail.
 
  The simplest way that throwTo can actually block in your program, as
  far as I can see, and one that will only affect the threaded RTS, is
  if the sleeper thread and whichever thread is running the other
  throwBack are executing on different capabilities; this will always
  cause throwTo to block. (You could try looking at a ghc event log to
  find out more.)
 
  I last ran into trouble like that with System.Timeout.timeout; for
  that function I finally convinced myself that uninterruptibleMask
  is the only way to avoid such problems; then throwTo will not be
  interrupted by exceptions even when it blocks. Maybe this is the
  solution for your problem, too.
 
  Hope that helps,
 
  Bertram
 
 
  ___
  Glasgow-haskell-users mailing list
  Glasgow-haskell-users@haskell.org
  http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
 



Re: Foreign.StablePtr: nullPtr double-free questions.

2013-03-13 Thread Edward Z. Yang
Excerpts from Remi Turk's message of Wed Mar 13 13:09:18 -0700 2013:
 Thanks for your quick reply. Could you elaborate on what a bit of
 overhead means?
 As a bit of context, I'm working on a small library for working with
 (im)mutable extendable
 tuples/records based on Storable and ForeignPtr, and I'm using
 StablePtr's as back-references
 to Haskell-land. Would you expect StablePtr's to have serious
 performance implications
 in such a scenario compared to, say, an IORef?

Yes, they will. Every stable pointer that is active has to be stuffed
into a giant array, and the entire array must be traversed during every
GC.  See also: http://hackage.haskell.org/trac/ghc/ticket/7670

Edward



RE: runghc -fdefer-type-errors

2013-03-11 Thread Edward Z. Yang
Excerpts from Simon Peyton-Jones's message of Mon Mar 11 16:04:31 -0700 2013:
 Aha.  It is indeed true that
 
 ghc -fdefer-type-errors -w
 
 does not suppress the warnings that arise from the type errors; indeed there 
 is no current way to do so.  How to do that?
 
 To be kosher there should really be a flag to switch off those warnings 
 alone, perhaps
 -fno-warn-type-errors
 
 So then -fwarn-type-errors is on by default, but is only relevant when 
 -fdefer-type-errors is on.  Once -fdefer-type-errors is on, 
 -fno-warn-type-errors and -fwarn-type-errors suppress or enable the warnings. 
  -w would then include -fno-warn-type-errors.
 
 Is that a design everyone would like?  If so, would someone like to open a 
 ticket, implement it, update the documentation, and send a patch?

SGTM.

Edward
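For readers unfamiliar with the flag: under -fdefer-type-errors, an ill-typed binding compiles with a warning, and the error is only raised if the binding is evaluated at runtime. A small example:

```haskell
-- Compile with: ghc -fdefer-type-errors Defer.hs
-- The type error below becomes a warning at compile time and a
-- runtime exception only if 'oops' is ever forced.
oops :: Int
oops = 'x'   -- ill-typed, deferred

main :: IO ()
main = putStrLn "oops is never forced, so this runs fine"
```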



Re: Foreign.StablePtr: nullPtr double-free questions.

2013-03-08 Thread Edward Z. Yang
Excerpts from Remi Turk's message of Fri Mar 08 18:28:56 -0800 2013:
 Good night everyone,
 
 I have two questions with regards to some details of the
 Foreign.StablePtr module. [1]
 
 1) The documentation suggests, but does not explicitly state, that
   castStablePtrToPtr `liftM` newStablePtr x
 will never yield a nullPtr. Is this guaranteed to be the case or not?
 It would conveniently allow me to store a Maybe for free, using
 nullPtr for Nothing, but I am hesitant about relying on something that
 isn't actually guaranteed by the documentation.

No, you cannot assume that.  In fact, stable pointer zero is
base_GHCziTopHandler_runIO_info:

ezyang@javelin:~/Dev/haskell$ cat sptr.hs
import Foreign.StablePtr
import Foreign.Ptr

main = do
let x = castPtrToStablePtr nullPtr
freeStablePtr x
ezyang@javelin:~/Dev/haskell$ ~/Dev/ghc-build-tick/inplace/bin/ghc-stage2 
--make sptr.hs -debug 
[1 of 1] Compiling Main ( sptr.hs, sptr.o )
Linking sptr ...
ezyang@javelin:~/Dev/haskell$ gdb ./sptr
GNU gdb (GDB) 7.5-ubuntu
Copyright (C) 2012 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type show copying
and show warranty for details.
This GDB was configured as x86_64-linux-gnu.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /srv/code/haskell/sptr...done.
(gdb) b freeStablePtrUnsafe
Breakpoint 1 at 0x73f8a7: file rts/Stable.c, line 263.
(gdb) r
Starting program: /srv/code/haskell/sptr 
[Thread debugging using libthread_db enabled]
Using host libthread_db library /lib/x86_64-linux-gnu/libthread_db.so.1.

Breakpoint 1, freeStablePtrUnsafe (sp=0x0) at rts/Stable.c:263
263 ASSERT((StgWord)sp < SPT_size);
(gdb) list 
258 }
259 
260 void
261 freeStablePtrUnsafe(StgStablePtr sp)
262 {
263 ASSERT((StgWord)sp < SPT_size);
264 freeSpEntry(stable_ptr_table[(StgWord)sp]);
265 }
266 
267 void
(gdb) p stable_ptr_table[(StgWord)sp]
$1 = {addr = 0x9d38e0}
(gdb) p *(StgClosure*)stable_ptr_table[(StgWord)sp]
$2 = {header = {info = 0x4e89c8 base_GHCziTopHandler_runIO_info}, payload 
= 0x9d38e8}

Regardless, you don't want to do that anyway, because stable pointers
have a bit of overhead.
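A safer alternative to the nullPtr trick is to push the Maybe into the stored value itself, since a StablePtr (Maybe a) is a valid stable pointer regardless of which constructor it holds. A minimal sketch:

```haskell
import Foreign.StablePtr (StablePtr, deRefStablePtr, newStablePtr)

-- Encode absence inside the value rather than in the pointer
-- representation, avoiding any assumption about nullPtr.
storeMaybe :: Maybe a -> IO (StablePtr (Maybe a))
storeMaybe = newStablePtr

loadMaybe :: StablePtr (Maybe a) -> IO (Maybe a)
loadMaybe = deRefStablePtr
```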

 2) If I read the documentation correctly, when using StablePtr it is
 actually quite difficult to avoid undefined behaviour, at least in
 GHC(i). In particular, a double-free on a StablePtr yields undefined
 behaviour. However, when called twice on the same value, newStablePtr
 yields the same StablePtr in GHC(i).
 E.g.:
 
 module Main where
 
 import Foreign
 
 foo x y = do
     p1 <- newStablePtr x
     p2 <- newStablePtr y
     print $ castStablePtrToPtr p1 == castStablePtrToPtr p2
     freeStablePtr p1
     freeStablePtr p2 -- potential double free!
 
 main = let x = "Hello, world!" in foo x x -- undefined behaviour!
 
 prints True under GHC(i), False from Hugs. Considering that foo
 and main might be in different packages written by different authors,
 this makes correct use rather complicated. Is this behaviour (and the
 consequential undefinedness) intentional?

I think this bug was inadvertently fixed in the latest version of GHC;
see:

commit 7e7a4e4d7e9e84b2c57d3d55e372e738b5f8dbf5
Author: Simon Marlow marlo...@gmail.com
Date:   Thu Feb 14 08:46:55 2013 +

Separate StablePtr and StableName tables (#7674)

To improve performance of StablePtr.

Cheers,
Edward



RE: Cloud Haskell and network latency issues with -threaded

2013-02-07 Thread Edward Z. Yang
Hey folks,

The latency changes sound relevant to some work on the scheduler I'm doing;
is there a place I can see the changes?

Thanks,
Edward

Excerpts from Simon Peyton-Jones's message of Wed Feb 06 10:10:10 -0800 2013:
 I (with help from Kazu and helpful comments from Bryan and Johan) have nearly 
 completed an overhaul to the IO manager based on my observations and we are 
 in the final stages of getting it into GHC
 
 This is really helpful. Thank you very much Andreas, Kazu, Bryan, Johan.
 
 Simon
 
 From: parallel-hask...@googlegroups.com 
 [mailto:parallel-hask...@googlegroups.com] On Behalf Of Andreas Voellmy
 Sent: 06 February 2013 14:28
 To: watson.timo...@gmail.com
 Cc: kosti...@gmail.com; parallel-haskell; glasgow-haskell-users@haskell.org
 Subject: Re: Cloud Haskell and network latency issues with -threaded
 
 Hi all,
 
 I haven't followed the conversations around CloudHaskell closely, but I 
 noticed the discussion around latency using the threaded runtime system, and 
 I thought I'd jump in here.
 
 I've been developing a server in Haskell that serves hundreds to thousands of 
 clients over very long-lived TCP sockets. I also had latency problems with 
 GHC. For example, with 100 clients I had a 10 ms (millisecond) latency and 
 with 500 clients I had a 29ms latency. I looked into the problem and found 
 that some bottlenecks in the threaded IO manager were the cause. I made some 
 hacks there and got the latency for 100 and 500 clients down to under 0.2 ms. 
 I (with help from Kazu and helpful comments from Bryan and Johan) have nearly 
 completed an overhaul to the IO manager based on my observations and we are 
 in the final stages of getting it into GHC. Hopefully our work will also fix 
 the latency issues in CloudHaskell programs :)
 
 It would be very helpful if someone has some benchmark CloudHaskell 
 applications and workloads to test with. Does anyone have these handy?
 
 Cheers,
 Andi
 
 On Wed, Feb 6, 2013 at 9:09 AM, Tim Watson 
 watson.timo...@gmail.commailto:watson.timo...@gmail.com wrote:
 Hi Kostirya,
 
 I'm putting the parallel-haskell and ghc-users lists on cc, just in case 
 other (better informed) folks want to chip in here.
 
 
 
 First of all, I'm assuming you're talking about network latency when 
 compiling with -threaded - if not I apologise for misunderstanding!
 
 There is apparently an outstanding network latency issue when compiling with 
 -threaded, but according to a conversation I had with the other developers on 
 #haskell-distributed, this is not something that's specific to Cloud Haskell. 
 It is something to do with the threaded runtime system, so would need to be 
 solved for GHC (or is it just the Network package!?) in general. Writing up a 
 simple C program and equivalent socket use in Haskell and comparing the 
 latency using -threaded will show this up.
 
 See the latency section in 
 http://haskell-distributed.github.com/wiki/networktransport.html for some 
 more details. According to that, there *are* some things we might be able to 
 do, but the 20% latency isn't going to change significantly on the face of 
 things.
 
 We have an open ticket to look into this 
 (https://cloud-haskell.atlassian.net/browse/NTTCP-4) and at some point we'll 
 try and put together the sample programs in a github repository (if that's 
 not already done - I might've missed previous spikes done by Edsko or others) 
 and investigate further.
 
 One of the other (more experienced!) devs might be able to chip in and 
 proffer a better explanation.
 
 Cheers,
 Tim
 
On 6 Feb 2013, at 13:27, kosti...@gmail.com wrote:
 
 Have you found it necessary to run Haskell in non-threaded mode during 
 intense network data exchange?
 I am seeing a double performance penalty in threaded mode. But I must 
 use threaded mode because epoll and kevent are available in threaded 
 mode only.
 
 
 [snip]
 
 
 
 On Wednesday, 6 February 2013 12:33:36 UTC+2, Tim Watson wrote:
  Hello all,
 
  It's been a busy week for Cloud Haskell and I wanted to share a few of
  our news items with you all.
 
  Firstly, we have a new home page at http://haskell-distributed.github.com,
  into which most of the documentation and wiki pages have been merged. Making
  sassy looking websites is not really my bag, so I'm very grateful to the
 various authors whose Creative Commons licensed designs and layouts made
  it easy to put together. We've already had some pull requests to fix minor
  problems on the site, so thanks very much to those who've contributed 
  already!
 
  As well as the new site, you will find a few of us hanging out on the
  #haskell-distributed channel on freenode. Please do come along and join in
  the conversation.
 
  We also recently split up the distributed-process project into separate
  git repositories, one for each component that makes up Cloud Haskell. This
  was done partly for administrative purposes and partly 

Re: Cloud Haskell and network latency issues with -threaded

2013-02-07 Thread Edward Z. Yang
OK. I think it is high priority for us to get some latency benchmarks
into nofib so that GHC devs (including me) can start measuring changes
off them.  I know Edsko has some benchmarks here:
http://www.edsko.net/2013/02/06/performance-problems-with-threaded/
but they depend on network which makes it a little difficult to move into nofib.
I'm working on other scheduler changes that may help you guys out; we
should keep each other updated.

I noticed your patch also incorporates the "make yield actually work" patch;
do you think the improvement in 7.4.1 was due to that specific change?
(Have you instrumented the run queues and checked how your patch changes
the distribution of jobs over your runtime?)

Somewhat unrelatedly, if you have some good latency tests already,
it may be worth a try compiling your copy of GHC with -fno-omit-yields, so that
forced context switches get serviced more predictably.

Cheers,
Edward

Excerpts from Andreas Voellmy's message of Thu Feb 07 21:20:25 -0800 2013:
 Hi Edward,
 
 I did two things to improve latency for my application: (1) rework the IO
 manager and (2) stabilize the work pushing. (1) seems like a big win and we
 are almost done with the work on that part. It is less clear whether (2)
 will generally help much. It helped me when I developed it against 7.4.1,
 but it doesn't seem to have much impact on HEAD on the few measurements I
 did. The idea of (2) was to keep running averages of the run queue length
 of each capability, then push work when these running averages get too
 out-of-balance. The desired effect (which seems to work on my particular
 application) is to avoid cases in which threads are pushed back and forth
 among cores, which may make cache usage worse. You can see my patch here:
 https://github.com/AndreasVoellmy/ghc-arv/commits/push-work-exchange-squashed
 .
 
 -Andi
 
 On Fri, Feb 8, 2013 at 12:10 AM, Edward Z. Yang ezy...@mit.edu wrote:
 
  Hey folks,
 
  The latency changes sound relevant to some work on the scheduler I'm doing;
  is there a place I can see the changes?
 
  Thanks,
  Edward
 
  Excerpts from Simon Peyton-Jones's message of Wed Feb 06 10:10:10 -0800
  2013:
   I (with help from Kazu and helpful comments from Bryan and Johan) have
  nearly completed an overhaul to the IO manager based on my observations and
  we are in the final stages of getting it into GHC
  
   This is really helpful. Thank you very much Andreas, Kazu, Bryan, Johan.
  
   Simon
  
   From: parallel-hask...@googlegroups.com [mailto:
  parallel-hask...@googlegroups.com] On Behalf Of Andreas Voellmy
   Sent: 06 February 2013 14:28
   To: watson.timo...@gmail.com
   Cc: kosti...@gmail.com; parallel-haskell;
  glasgow-haskell-users@haskell.org
   Subject: Re: Cloud Haskell and network latency issues with -threaded
  
   Hi all,
  
   I haven't followed the conversations around CloudHaskell closely, but I
  noticed the discussion around latency using the threaded runtime system,
  and I thought I'd jump in here.
  
   I've been developing a server in Haskell that serves hundreds to
  thousands of clients over very long-lived TCP sockets. I also had latency
  problems with GHC. For example, with 100 clients I had a 10 ms
  (millisecond) latency and with 500 clients I had a 29ms latency. I looked
  into the problem and found that some bottlenecks in the threaded IO manager
  were the cause. I made some hacks there and got the latency for 100 and 500
  clients down to under 0.2 ms. I (with help from Kazu and helpful comments
  from Bryan and Johan) have nearly completed an overhaul to the IO manager
  based on my observations and we are in the final stages of getting it into
  GHC. Hopefully our work will also fix the latency issues in CloudHaskell
  programs :)
  
   It would be very helpful if someone has some benchmark CloudHaskell
  applications and workloads to test with. Does anyone have these handy?
  
   Cheers,
   Andi
  
   On Wed, Feb 6, 2013 at 9:09 AM, Tim Watson watson.timo...@gmail.com wrote:
   Hi Kostirya,
  
   I'm putting the parallel-haskell and ghc-users lists on cc, just in case
  other (better informed) folks want to chip in here.
  
   
  
   First of all, I'm assuming you're talking about network latency when
  compiling with -threaded - if not I apologise for misunderstanding!
  
   There is apparently an outstanding network latency issue when compiling
  with -threaded, but according to a conversation I had with the other
  developers on #haskell-distributed, this is not something that's specific
  to Cloud Haskell. It is something to do with the threaded runtime system,
  so would need to be solved for GHC (or is it just the Network package!?) in
  general. Writing up a simple C program and equivalent socket use in Haskell
  and comparing the latency using -threaded will show this up.
  
   See the latency section in
  http://haskell-distributed.github.com/wiki/networktransport.html for some
  more details

Re: What is the scheduler type of GHC?

2013-01-16 Thread Edward Z. Yang
Excerpts from Magicloud Magiclouds's message of Wed Jan 16 00:32:00 -0800 2013:
 Hi,
   Just read a post about schedulers in erlang and go lang, which informed
 me that erlang is preemptive and go lang is cooperative.
   So which is used by GHC? From ghc wiki about rts, if the question is only
 within haskell threads, it seems like cooperative.

Additionally, the current scheduler is round-robin with some heuristics for
when threads get to cut the line, so we do not have priorities for threads.
I'm currently working on a patch which allows for more flexible scheduling.

Edward

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Hoopl vs LLVM?

2012-12-10 Thread Edward Z. Yang
Hello Greg,

Hoopl passes live in compiler/cmm; searching for DataflowLattice will
turn up lattice definitions which are the core of the analyses and rewrites.
Unfortunately, the number of true Hoopl optimizations was somewhat reduced
when Simon Marlow did aggressive performance optimizations to get the
new code generator shipped with GHC by default, but I think we hope to
add some more interesting passes for -O3, etc.

Hoopl and LLVM's approaches to optimization are quite different.  LLVM
uses SSA representation, whereas Hoopl uses the Lerner-Grove-Chambers
algorithm to do analyses without requiring single-assignment.  The other
barrier you're likely to run into is the fact that GHC generated C-- code
looks very different from conventional compiler output.

Hope that helps,
Edward

Excerpts from Greg Fitzgerald's message of Mon Dec 10 14:24:02 -0800 2012:
 I don't know my way around the GHC source tree.  How can I get the list of
 optimizations implemented with Hoopl?  Is there overlap with LLVM's
 optimization passes?  If so, has anyone compared the implementations at
 all?  Should one group be stealing ideas from the other?  Or apples and
 oranges?
 
 Thanks,
 Greg



Re: Using DeepSeq for exception ordering

2012-11-08 Thread Edward Z. Yang
It looks like the optimizer is getting confused when the value being
evaluated is an IO action (nota bene: 'evaluate m' where m :: IO a
is pretty odd, as far as things go). File a bug?

Cheers,
Edward

Excerpts from Albert Y. C. Lai's message of Thu Nov 08 10:04:15 -0800 2012:
 On 12-11-08 01:01 PM, Nicolas Frisby wrote:
  And the important observation is: all of them throw A if interpreted in
  ghci or compiled without -O, right?
 
 Yes.
 



Re: Using DeepSeq for exception ordering

2012-11-07 Thread Edward Z. Yang
Hello Simon,

I think the confusion here is focused on what exactly it is that
the NFData class offers:

class NFData a where
    rnf :: a -> ()

rnf can be thought of as a function which produces a thunk (for unit)
which, when forced, fully evaluates the value.  With this in hand,
it's pretty clear how to use evaluate to enforce ordering:

evaluate (rnf ('a': throw exceptionA))

One could imagine defining:

deepSeqEvaluate :: NFData a => a -> IO ()
deepSeqEvaluate = evaluate . rnf

In general, the right way to think about the semantics here is to
distinguish between evaluation as an explicit effect (evaluate) and
evaluation as a side effect of running IO (as when you write x `seq` return ()).
They're distinct, and the latter doesn't give you ordering guarantees.
This applies even when DeepSeq is involved.

Cheers,
Edward
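[Editorial note: a minimal, self-contained sketch of the ordering guarantee
described above; the exception type E and constructors A/B are invented for
the example, standing in for the thread's exceptionA/exceptionB.]

```haskell
-- Sketch: evaluation as an explicit effect (evaluate . rnf) raises the
-- exception hidden in the list before throwIO runs.
import Control.DeepSeq (rnf)
import Control.Exception

data E = A | B deriving (Show, Eq)
instance Exception E

ordered :: IO ()
ordered = do
  evaluate (rnf ('a' : throw A))  -- deep evaluation as an IO effect
  throwIO B                       -- never reached

main :: IO ()
main = do
  r <- try ordered :: IO (Either E ())
  print r  -- A wins deterministically: Left A
```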

Excerpts from Simon Hengel's message of Wed Nov 07 05:49:21 -0800 2012:
 Hi,
 I'm puzzled whether it is feasible to use existing NFData instances for
 exception ordering.
 
 Here is some code that won't work:
 
 return $!! 'a' : throw exceptionA
 throwIO exceptionB
 
 Here GHC makes a non-deterministic choice between exceptionA and
 exceptionB.  The reason is that the standard DeepSeq instances use
 `seq`, and `seq` does not help with exception ordering**.
 
 I tried several things (ghc-7.4.2 with -O2), and the following seems to
 order the exceptions for this particular case:
 
 (evaluate . force) ('a' : throw exceptionA)
 throwIO exceptionB
 
 But I'm a little bit worried that this may not hold in general, e.g.
 
 (return $!! 'a' : throw exceptionA) >>= evaluate
 throwIO exceptionB
 
 results in exceptionB.  I think my main issue here is that I do not
 properly understand how seq and seq# (which is used by evaluate) do
 interact with each other.  And how I can reason about code that uses
 both.
 
 The question is really whether it is somehow feasible to use existing
 NFData instances to order exceptions.  Or would we need to define a
 separate type class + instances for that, e.g.:
 
 class DeepEvaluate a where
   deepEvaluate :: a -> IO a
   deepEvaluate = evaluate
 
 instance DeepEvaluate Char where
 
 instance DeepEvaluate a => DeepEvaluate [a] where
   deepEvaluate = mapM deepEvaluate
 
 If you have any related ideas or thoughts, I'd love to hear about them.
 
 Cheers,
 Simon
 
 ** This is desired behavior, see the discussion at
http://hackage.haskell.org/trac/ghc/ticket/5129
 



Re: [GHC Users] Dictionary sharing

2012-06-29 Thread Edward Z. Yang
Hello Jonas,

Like other top-level definitions, these instances are considered CAFs
(constant applicative forms), so these instances will in fact usually
be evaluated only once per type X.

import System.IO.Unsafe
class C a where
    dflt :: a
instance C Int where
    dflt = unsafePerformIO (putStrLn "bang" >> return 2)
main = do
    print (dflt :: Int)
    print (dflt :: Int)
    print (dflt :: Int)

ezyang@javelin:~/Dev/haskell$ ./caf
bang
2
2
2

Cheers,
Edward

Excerpts from Jonas Almström Duregård's message of Fri Jun 29 07:25:42 -0400 
2012:
 Hi,
 
 Is there a way to ensure that functions in a class instance are
 treated as top level definitions and not re-evaluated?
 
 For instance if I have this:
 
 class C a where
   list :: [a]
 
 instance C a => C [a] where
   list = permutations list
 
 How can I ensure that list :: [[X]] is evaluated at most once for any
 type X (throughout my program)?
 
 I assume this is potentially harmful, since list can never be garbage
 collected and there may exist an unbounded number of X's.
 
 I currently have a solution that uses Typeable to memoise the result
 of the function based on its type. Is there an easier way?
 
 Regards,
 Jonas
 



Re: [GHC Users] Dictionary sharing

2012-06-29 Thread Edward Z. Yang
I say usually because while I believe this to be true for the
current implementation of GHC, I don't think we necessary give
this operational guarantee.

But yes, your real problem is that there is a world of difference between
functions and non-functions.  You will need to use one of the usual tricks for
memoising functions, or forgo using a function altogether and lean on laziness.

Edward
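[Editorial note: a sketch of one such memoising trick, leaning on laziness by
backing the method with a non-function CAF; the names (C, memoised, fib) are
illustrative, and the sharing relies on the CAF behaviour described above
rather than on a language guarantee.]

```haskell
-- Sketch: back the Int -> a class method with a list CAF.  'table' is
-- evaluated at most once per instance, so lookups are shared.
class C a where
  memoised :: Int -> a

instance C Integer where
  memoised = (table !!)
    where
      table = map fib [0..]          -- the shared CAF
      fib n | n < 2     = toInteger n
            | otherwise = memoised (n - 1) + memoised (n - 2)

main :: IO ()
main = print (memoised 30 :: Integer)  -- fast, thanks to sharing
```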

Excerpts from Jonas Almström Duregård's message of Fri Jun 29 11:21:46 -0400 
2012:
 Thank you for your response Edward,
 
 You write that it is usually only evaluated once, do you know the
 circumstances under which it is evaluated more than once? I have some
 examples of this but they are all very large.
 
 The real issue I was having was actually not with a list but with a
 memoised function i.e. something like:
 
 class C a where
   memoised :: Int -> a
 
 
 Perhaps functions are treated differently?
 
 Regards,
 Jonas
 
 On 29 June 2012 15:55, Edward Z. Yang ezy...@mit.edu wrote:
  Hello Jonas,
 
  Like other top-level definitions, these instances are considered CAFs
  (constant applicative forms), so these instances will in fact usually
  be evaluated only once per type X.
 
     import System.IO.Unsafe
     class C a where
         dflt :: a
     instance C Int where
       dflt = unsafePerformIO (putStrLn "bang" >> return 2)
     main = do
         print (dflt :: Int)
         print (dflt :: Int)
         print (dflt :: Int)
 
  ezyang@javelin:~/Dev/haskell$ ./caf
  bang
  2
  2
  2
 
  Cheers,
  Edward
 
  Excerpts from Jonas Almström Duregård's message of Fri Jun 29 07:25:42 
  -0400 2012:
  Hi,
 
  Is there a way to ensure that functions in a class instance are
  treated as top level definitions and not re-evaluated?
 
  For instance if I have this:
  
  class C a where
    list :: [a]
 
   instance C a => C [a] where
    list = permutations list
  
  How can I ensure that list :: [[X]] is evaluated at most once for any
  type X (throughout my program)?
 
  I assume this is potentially harmful, since list can never be garbage
  collected and there may exist an unbounded number of X's.
 
  I currently have a solution that uses Typeable to memoise the result
  of the function based on its type. Is there an easier way?
 
  Regards,
  Jonas
 



Re: Weird behavior of the NonTermination exception

2012-05-03 Thread Edward Z. Yang
Excerpts from Bas van Dijk's message of Thu May 03 11:10:38 -0400 2012:
 As can be seen, the putMVar is executed successfully. So why do I get
 the message: thread blocked indefinitely in an MVar operation?

GHC will send BlockedIndefinitelyOnMVar to all threads involved
in the deadlock, so it's not unusual that this can interact with
error handlers to cause the system to become undeadlocked.

http://blog.ezyang.com/2011/07/blockedindefinitelyonmvar/

However, I must admit I am a bit confused as for the timing of
the thrown exceptions.

Edward
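[Editorial note: a minimal illustration of the exception under discussion; a
single thread suffices, since the RTS delivers BlockedIndefinitelyOnMVar once
the GC finds the blocked thread unreachable.]

```haskell
-- takeMVar on an MVar nobody else can ever fill: the RTS detects the
-- deadlock at GC time and throws BlockedIndefinitelyOnMVar.
import Control.Concurrent.MVar
import Control.Exception

main :: IO ()
main = do
  mv <- newEmptyMVar :: IO (MVar ())
  r  <- try (takeMVar mv)
  case r of
    Left BlockedIndefinitelyOnMVar -> putStrLn "deadlock detected"
    Right () -> putStrLn "unexpected"
```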



Re: containing memory-consuming computations

2012-04-20 Thread Edward Z. Yang
So, it would be pretty interesting if we could have an ST s style
mechanism, where the data structure is not allowed to escape.
But I wonder if this would be too cumbersome for anyone to use.

Edward

Excerpts from Simon Marlow's message of Fri Apr 20 06:07:20 -0400 2012:
 On 19/04/2012 11:45, Herbert Valerio Riedel wrote:
 
   For the time-dimension, I'm already using functions such as
   System.Timeout.timeout which I can use to make sure that even a (forced)
   pure computation doesn't require (significantly) more wall-clock time
   than I expect it to.
 
 Note that timeout uses wall-clock time, but you're really interested in 
 CPU time (presumably).  If there are other threads running, then using 
 timeout will not do what you want.
 
 You could track allocation and CPU usage per thread, but note that 
 laziness could present a problem: if a thread evaluates a lazy 
 computation created by another thread, it will be charged to the thread 
 that evaluated it, not the thread that created it.  To get around this 
 you would need to use the profiling system, which tracks costs 
 independently of lazy evaluation.
 
 On 19/04/2012 17:04, Herbert Valerio Riedel wrote:
 
  At least this seems easier than needing a per-computation or
  per-IO-thread caps.
 
  How hard would per-IO-thread caps be?
 
 For tracking memory use, which I think is what you're asking for, it 
 would be quite hard.  One problem is sharing: when a data structure is 
 shared between multiple threads, which one should it be charged to?  Both?
 
 To calculate the amount of memory use per thread you would need to run 
 the GC multiple times, once per thread, and observe how much data is 
 reachable.  I can't think of any fundamental difficulties with doing 
 that, but it could be quite expensive.  There might be some tricky 
 interactions with the reachability property of threads themselves: a 
 blocked thread is only reachable if the object it is blocked on is also 
 reachable.
 
 Cheers,
 Simon
 



Re: containing memory-consuming computations

2012-04-20 Thread Edward Z. Yang
Excerpts from Brandon Allbery's message of Fri Apr 20 19:31:54 -0400 2012:
  So, it would be pretty interesting if we could have an ST s style
  mechanism, where the data structure is not allowed to escape.
  But I wonder if this would be too cumbersome for anyone to use.
 
 Isn't this what monadic regions are for?

That's right!  But we have a hard enough time convincing people it's
worth it, just for file handles.

Edward
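[Editorial note: a toy sketch of the "ST s"-style mechanism mentioned above,
with invented names (R, Buf, withRegion); a phantom type parameter tags the
structure, and the rank-2 runner's type forbids the tagged structure from
escaping the region.]

```haskell
{-# LANGUAGE RankNTypes #-}
newtype R s a = R a                -- region-tagged computation
newtype Buf s = Buf [Int]          -- the "memory-consuming" structure

bindR :: R s a -> (a -> R s b) -> R s b
bindR (R a) k = k a

newBuf :: R s (Buf s)
newBuf = R (Buf [])

push :: Int -> Buf s -> R s (Buf s)
push x (Buf xs) = R (Buf (x : xs))

total :: Buf s -> R s Int
total (Buf xs) = R (sum xs)

-- The result type 'a' cannot mention 's', so "withRegion newBuf" is a
-- type error: the Buf cannot leak out of its region.
withRegion :: (forall s. R s a) -> a
withRegion (R a) = a

main :: IO ()
main = print $ withRegion (newBuf `bindR` push 1 `bindR` push 2 `bindR` total)
```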



Invariants for GHC.Event ensureIOManagerIsRunning

2012-04-13 Thread Edward Z. Yang
Hello all,

I recently ran into a rather reproduceable bug where I would
get this error from the event manager:

/dev/null: hClose: user error (Pattern match failure in do expression at 
libraries/base/System/Event/Thread.hs:83:3-10)

The program was doing some rather strange things:

- It was running the Haskell RTS inside another system (Urweb)
  which was making use of pthreads, sockets, etc.

- The Haskell portion was linked against the threaded RTS, and doing
  communication with a process.

and is rather complicated (two compilers are involved).  But
the gist of the matter is that if I added a quick call to
ensureIOManagerIsRunning after hs_init, the error went away.

So, if the IO manager is not eagerly loaded at the call to hs_init,
how do we decided when it should be loaded?  It seems probably that
we missed a case.

Edward

P.S. I tried reproducing on a simple test case but couldn't manage it.



Re: Interpreting the strictness annotations output by ghc --show-iface

2012-03-07 Thread Edward Z. Yang
Check out compiler/basicTypes/Demand.lhs

Cheers,
Edward

Excerpts from Johan Tibell's message of Wed Mar 07 18:21:56 -0500 2012:
 Hi,
 
 If someone could clearly specify the exact interpretation of these
 LLSL(ULL) strictness/demand annotations shown by ghc --show-iface I'd
 like to try to write a little tool that highlights the function
 argument binding in an IDE (e.g. Emacs) with this information. Anyone
 care to explain the syntax?
 
 Cheers,
 Johan
 



Re: Interpreting the strictness annotations output by ghc --show-iface

2012-03-07 Thread Edward Z. Yang
This is the important bit of code in the file:

instance Outputable Demand where
ppr Top  = char 'T'
ppr Abs  = char 'A'
ppr Bot  = char 'B'

ppr (Defer ds)  = char 'D' <> ppr ds
ppr (Eval ds)   = char 'U' <> ppr ds

ppr (Box (Eval ds)) = char 'S' <> ppr ds
ppr (Box Abs)   = char 'L'
ppr (Box Bot)   = char 'X'
ppr d@(Box _)   = pprPanic "ppr: Bad boxed demand" (ppr d)

ppr (Call d)    = char 'C' <> parens (ppr d)


instance Outputable Demands where
ppr (Poly Abs) = empty
ppr (Poly d)   = parens (ppr d <> char '*')
ppr (Prod ds)  = parens (hcat (map ppr ds))

You do need to be able to read the pretty printing combinators. Here's a quick
cheat sheet; check 
http://hackage.haskell.org/packages/archive/pretty/1.0.1.0/doc/html/Text-PrettyPrint-HughesPJ.html
the basic idea.

char == print a single character
<> == concatenate without adding a space
parens x == put parentheses around x
hcat == concatenate a list without adding a space

Cheers,
Edward

Excerpts from Johan Tibell's message of Wed Mar 07 18:41:42 -0500 2012:
 Edward, I have looked at that file before and it didn't make me much
 wiser, because I cannot map it to the output.
 
 I find it's the parenthesis that confuses me the most. What does this mean?
 
 C(U(LU(L)))
 
 what about this?
 
 U(SLLAA)LL
 
 -- Johan
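[Editorial note: a toy parser — not GHC code — for the surface syntax the
printer shown earlier in this message emits (a letter optionally followed by
parenthesized sub-demands; the Poly "(d*)" form is ignored), which makes the
nesting of strings like "C(U(LU(L)))" concrete.]

```haskell
-- Parse a demand string into a tree: D letter children.
data D = D Char [D] deriving (Show, Eq)

-- Parse as many sibling demands as possible; return the leftover input.
parseDs :: String -> ([D], String)
parseDs s@(c:rest)
  | c `elem` "TABDUSLXC" =
      case rest of
        '(':inner -> let (kids, rest') = parseDs inner
                     in case rest' of
                          ')':rest'' -> let (sibs, r) = parseDs rest''
                                        in (D c kids : sibs, r)
                          _ -> error "unbalanced parens"
        _ -> let (sibs, r) = parseDs rest in (D c [] : sibs, r)
parseDs s = ([], s)

main :: IO ()
main = do
  print (fst (parseDs "C(U(LU(L)))"))  -- one call demand, nested children
  print (fst (parseDs "U(SLLAA)LL"))   -- three top-level demands
```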



Re: Interpreting the strictness annotations output by ghc --show-iface

2012-03-07 Thread Edward Z. Yang
Arguably, what should happen is we redo the format for machine-parseability
and use that in future versions of GHC.

Edward

Excerpts from Johan Tibell's message of Wed Mar 07 23:38:06 -0500 2012:
 Thanks Edward. I'll try to summarize this in human readable form and
 publish it on the wiki.
 
 -- Johan



Re: Is it true that an exception is always terminates the thread?

2012-01-23 Thread Edward Z. Yang
Excerpts from Heka Treep's message of Mon Jan 23 13:56:47 -0500 2012:
 adding the message queue (with Chan, MVar or STM) for each process will not
 help in this kind of imitation.

Why not? Instead of returning a thread ID, send the write end of a Chan
which the thread is waiting on.  You can send messages (normal or
errors) using it.

Edward
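[Editorial note: a sketch of the suggestion above — spawn hands back the
write end of a Chan instead of a ThreadId, and callers send normal messages
or errors through it; the Msg type and handler are illustrative.]

```haskell
import Control.Concurrent
import Control.Concurrent.Chan

data Msg = Note String | Err String deriving Show

-- Return the mailbox, not the ThreadId; the thread blocks on readChan.
spawn :: (Msg -> IO ()) -> IO (Chan Msg)
spawn handler = do
  mbox <- newChan
  _ <- forkIO $ let loop = readChan mbox >>= handler >> loop in loop
  return mbox

main :: IO ()
main = do
  done <- newEmptyMVar
  mbox <- spawn $ \msg -> case msg of
    Note s -> putStrLn ("note: " ++ s)
    Err s  -> putStrLn ("error: " ++ s) >> putMVar done ()
  writeChan mbox (Note "hello")
  writeChan mbox (Err "boom")
  takeMVar done   -- wait until both messages are handled
```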



Re: Is it true that an exception is always terminates the thread?

2012-01-23 Thread Edward Z. Yang
Excerpts from Heka Treep's message of Mon Jan 23 15:11:51 -0500 2012:
 
 import Control.Monad.STM
 import Control.Concurrent
 import Control.Concurrent.STM.TChan
 
 spawn f = do
   mbox <- newTChanIO
   forkIO $ f mbox
   return mbox
 
 (!) = writeTChan
 
 actor mbox = do
   empty <- atomically $ isEmptyTChan mbox
   if empty
 then actor mbox
 else do
   val <- atomically $ readTChan mbox

Uh, don't you want to combine isEmptyTChan and readTChan into
one single atomic action?

   putStrLn val
   actor mbox
 
 test = do
  mbox <- spawn actor
  atomically $ mbox ! "1"
  atomically $ mbox ! "2"
  atomically $ mbox ! "3"
 
 --  test
 -- 1
 -- 2
 -- 3
 
 
 But there are several problems:
 
 * The @actor@ function is busy checking the channel all the time.

GHC's runtime system is clever. It will block appropriately.

 * Caller and callee need to perform synchronizations (for the @Chan@)
 or atomically transactions (for the @TChan@).

The synchronization for Chan is very cheap, and you would have needed
to synchronize anyway in Erlang (Erlang message queues are not lock free!)

Cheers,
Edward



Re: Is it true that an exception is always terminates the thread?

2012-01-23 Thread Edward Z. Yang
Excerpts from Heka Treep's message of Mon Jan 23 16:20:51 -0500 2012:
 actor :: TChan String -> IO ()
 actor mbox = forever $ do
   putStrLn "call to actor..."
   msg <- atomically $ do
     isEmpty <- isEmptyTChan mbox
     if isEmpty then return Nothing else readTChan mbox >>= return . Just
   when (isJust msg) $ putStrLn $ fromJust msg

There are several things wrong with this:

- You're only synchronizing over one variable: use a Chan, not a TChan
  (if you make that change, btw, it will automatically start working.)

- You don't want the transaction to succeed in all cases; you only want
  the transaction to succeed if you manage to get something from the
  TChan.  What you've done is convert the interface from blocking
  to non-blocking, but in a busy-loop.

- You don't even need to run isEmptyTChan. Just do a readTChan. It will
  block if there is nothing in the chan (technically, it will ask the
  STM transaction to retry, but STM is clever enough not to try running
  again unless any of the touched STM variables changes.)

Edward
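[Editorial note: the fixes above, applied in one sketch — block directly on
readTChan (STM's retry machinery handles the waiting), no isEmptyTChan, no
busy loop; the deliver callback is an addition for the example.]

```haskell
import Control.Concurrent
import Control.Concurrent.STM
import Control.Monad

-- Blocks until a message arrives; no polling.
actor :: TChan String -> (String -> IO ()) -> IO ()
actor mbox deliver = forever $ do
  msg <- atomically (readTChan mbox)
  deliver msg

main :: IO ()
main = do
  mbox <- newTChanIO
  out  <- newEmptyMVar
  _ <- forkIO (actor mbox (putMVar out))
  mapM_ (atomically . writeTChan mbox) ["1", "2", "3"]
  mapM_ (\_ -> takeMVar out >>= putStrLn) [1..3 :: Int]
```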



Re: Straight-line single assignment in C--

2012-01-21 Thread Edward Z. Yang
Excerpts from Edward Z. Yang's message of Fri Jan 20 23:44:02 -0500 2012:
 If multiple assignment is rare enough in straight line code, I might
 be able to take the conservative approach and just say
 
 a -> used multiple times
 
 Which I don't think will cause any problems in the inlining step later.
 But I don't have a good sense for this.

I realized the answer to my question, which is YES.  In particular, the
spill-reload step will result in this happening. Back to the drawing board...

Edward



Re: Runtime performance degradation for multi-threaded C FFI callback

2012-01-20 Thread Edward Z. Yang
Hello Sanket,

What happens if you run this experiment with 5 threads in the C function,
and have GHC run RTS with -N7? (e.g. five C threads + seven GHC threads = 12
threads on your 12-core box.)

Edward

Excerpts from Sanket Agrawal's message of Tue Jan 17 23:31:38 -0500 2012:
 I posted this issue on StackOverflow today. A brief recap:
 
  In the case when C FFI calls back a Haskell function, I have observed
 sharp increase in total time when multi-threading is enabled in C code
 (even when total number of function calls to Haskell remain same). In my
 test, I called a Haskell function 5M times using two scenarios (GHC 7.0.4,
 RHEL5, 12-core box):
 
 
 - Single-threaded C function: calls back the Haskell function 5M times.
   Total time: 1.32s.
 - 5 threads in the C function: each thread calls back the Haskell function
   1M times, so the total is still 5M. Total time: 7.79s. Verified that
   pthreads didn't contribute much to the overhead by having the same code
   call a C function instead, and comparing with the single-threaded version.
   So almost all of the increase in overhead seems to come from the GHC
   runtime.
 
 What I want to ask is if this is a known issue for GHC runtime? If not,  I
 will file a bug report for GHC team with code to reproduce it. I don't want
 to file a duplicate bug report if this is an already-known issue. I searched
 through GHC trac using some keywords but didn't see any bugs related to it.
 
 StackOverflow post link (has code and details on how to reproduce the
 issue):
 http://stackoverflow.com/questions/8902568/runtime-performance-degradation-for-c-ffi-callback-when-pthreads-are-enabled



Straight-line single assignment in C--

2012-01-20 Thread Edward Z. Yang
Hello all,

I was wondering if the following style of register assignment ever
shows up in C-- code generated by GHC:

a = R1
I32[a] = 1
a = R2
I32[a] = 2

That is to say, there are two disjoint live ranges of a: we could rename
all instances of a in the first and second lines to something different
without affecting the semantics (although maybe affecting register allocation
and stack spilling.)

asdf = R1
I32[asdf] = 1
a = R2
I32[a] = 2

But C-- is not in SSA form, and even when no loops are involved it seems
that some naughty code generator could reuse a variable.  This is
pretty inconvenient, because I want to record this information

a = R1 // used only once
I32[a] = 1
a = R2 // used only once
I32[a] = 2

in a map:

a -> used only once

But since the register names go all the same place this may end in tears.

If multiple assignment is rare enough in straight line code, I might
be able to take the conservative approach and just say

a -> used multiple times

Which I don't think will cause any problems in the inlining step later.
But I don't have a good sense for this.

Edward
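[Editorial note: a toy illustration — not GHC code — of the conservative rule
described in this message: count assignments per local in a straight-line
block, and downgrade any multiply-assigned local to "used multiple times";
the Stmt syntax is invented.]

```haskell
import qualified Data.Map as M

data Stmt = Assign String String   -- local := source  (toy syntax)
          | Store String Int       -- I32[local] = n

-- One map entry per local; multiply-assigned locals are conservatively
-- marked, since their entry would otherwise conflate two live ranges.
useInfo :: [Stmt] -> M.Map String String
useInfo stmts = M.map classify counts
  where
    counts = M.fromListWith (+) [(x, 1 :: Int) | Assign x _ <- stmts]
    classify 1 = "used only once"
    classify _ = "used multiple times"

main :: IO ()
main = print (useInfo [ Assign "a" "R1", Store "a" 1
                      , Assign "a" "R2", Store "a" 2 ])
```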



Re: Runtime performance degradation for multi-threaded C FFI callback

2012-01-17 Thread Edward Z. Yang
Hmm, this kind of sounds like GHC is assuming that it has control over
all of the threads, and when this assumption fails bad things happen.
(We use lightweight threads, and use the operating system threads that
map to pthreads sparingly.)  I'm sure Simon Marlow could give a more accurate
assessment, however.

Edward

Excerpts from Sanket Agrawal's message of Tue Jan 17 23:31:38 -0500 2012:
 I posted this issue on StackOverflow today. A brief recap:
 
  In the case when C FFI calls back a Haskell function, I have observed
 sharp increase in total time when multi-threading is enabled in C code
 (even when total number of function calls to Haskell remain same). In my
 test, I called a Haskell function 5M times using two scenarios (GHC 7.0.4,
 RHEL5, 12-core box):
 
 
- Single-threaded C function: call back Haskell function 5M times -
Total time 1.32s
- 5 threads in C function: each thread calls back the Haskell function 1M
times - so, total is still 5M - Total time 7.79s - Verified that pthread
didn't contribute much to the overhead by having the same code call a C
function instead, and compared with single-threaded version. So, almost all
of the increase in overhead seems to come from GHC runtime.
 
 What I want to ask is if this is a known issue for GHC runtime? If not,  I
 will file a bug report for GHC team with code to reproduce it. I don't want
 to file a duplicate bug report if this is already known issue. I searched
 through GHC trac using some keywords but didn't see any bugs related to it.
 
 StackOverflow post link (has code and details on how to reproduce the
 issue):
 http://stackoverflow.com/questions/8902568/runtime-performance-degradation-for-c-ffi-callback-when-pthreads-are-enabled



Re: log time instead of linear for case matching

2011-10-09 Thread Edward Z. Yang
Excerpts from Greg Weber's message of Sun Oct 09 12:39:03 -0400 2011:
 So first of all I am wondering if a sum type comparison does in fact scale
 linearly or if there are optimizations in place to make the lookup constant
 or logarithmic. Second, I was wondering (for the routing case) if Haskell can
 make a string case comparison logarithmic so that users can use case
 comparisons instead of maps for string collections that are known at compile
 time and won't change.

GHC will compile case-matches over large sum types into jump tables,
so the lookup becomes constant.  I don't think we have any cleverness for
strings.

Edward
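
For the routing use case, the usual workaround is exactly the map the poster mentions; a minimal sketch (the route names are invented for illustration):

```haskell
import qualified Data.Map as Map

-- A case expression over strings is tested clause by clause, so a
-- static routing table is better kept in a Data.Map, which gives
-- O(log n) lookup regardless of the number of routes.
routes :: Map.Map String Int
routes = Map.fromList [("home", 0), ("about", 1), ("contact", 2)]

route :: String -> Maybe Int
route path = Map.lookup path routes

main :: IO ()
main = print (route "about", route "missing")
-- prints (Just 1,Nothing)
```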



Re: Cheap and cheerful partial evaluation

2011-08-24 Thread Edward Z. Yang
I think it would be a pretty interesting project. :^)

Edward

Excerpts from Ryan Newton's message of Wed Aug 24 15:18:48 -0400 2011:
 Ah, and there's no core-haskell facility presently?  Thanks.
 
 On Wed, Aug 24, 2011 at 12:14 AM, Edward Z. Yang ezy...@mit.edu wrote:
 
  Since most of GHC's optimizations occur on core, not the user-friendly
  frontend language, doing so would be probably be nontrivial (e.g.
  we'd want some sort of core to Haskell decompiler.)
 
  Edward
 
  Excerpts from Ryan Newton's message of Tue Aug 23 13:46:45 -0400 2011:
   Edward,
  
   On first glance at your email I misunderstood you as asking about using
   GHC's optimizer as a source-to-source operation (using GHC as an
  optimizer,
   retrieving partially evaluated Haskell code).  That's not what you were
   asking for -- but is it possible?
  
 -Ryan
  
   P.S.   One compiler that comes to mind that exposes this kind of thing
   nicely is Chez Scheme ( http://scheme.com/ ).  In Chez you can get your
   hands on cp0 which does a source to source transform (aka compiler pass
   zero, after macro expansion), and could use cp0 to preprocess the source
  and
   then print it back out.
  
   On Mon, Aug 22, 2011 at 8:48 AM, Edward Z. Yang ezy...@mit.edu wrote:
  
I think this ticket sums it up very nicely!
   
Cheers,
Edward
   
Excerpts from Max Bolingbroke's message of Mon Aug 22 04:07:59 -0400
  2011:
 On 21 August 2011 19:20, Edward Z. Yang ezy...@mit.edu wrote:
  And no sooner do I send this email do I realize we have 'inline'
built-in,
  so I can probably experiment with this right now...

 You may be interested in my related ticket #5029:
 http://hackage.haskell.org/trac/ghc/ticket/5059

 I don't think this is totally implausible but you have to be very
 careful with recursive functions.

 Max
   
   
 



Re: Cheap and cheerful partial evaluation

2011-08-23 Thread Edward Z. Yang
Since most of GHC's optimizations occur on core, not the user-friendly
frontend language, doing so would be probably be nontrivial (e.g.
we'd want some sort of core to Haskell decompiler.)

Edward

Excerpts from Ryan Newton's message of Tue Aug 23 13:46:45 -0400 2011:
 Edward,
 
 On first glance at your email I misunderstood you as asking about using
 GHC's optimizer as a source-to-source operation (using GHC as an optimizer,
 retrieving partially evaluated Haskell code).  That's not what you were
 asking for -- but is it possible?
 
   -Ryan
 
 P.S.   One compiler that comes to mind that exposes this kind of thing
 nicely is Chez Scheme ( http://scheme.com/ ).  In Chez you can get your
 hands on cp0 which does a source to source transform (aka compiler pass
 zero, after macro expansion), and could use cp0 to preprocess the source and
 then print it back out.
 
 On Mon, Aug 22, 2011 at 8:48 AM, Edward Z. Yang ezy...@mit.edu wrote:
 
  I think this ticket sums it up very nicely!
 
  Cheers,
  Edward
 
  Excerpts from Max Bolingbroke's message of Mon Aug 22 04:07:59 -0400 2011:
   On 21 August 2011 19:20, Edward Z. Yang ezy...@mit.edu wrote:
And no sooner do I send this email do I realize we have 'inline'
  built-in,
so I can probably experiment with this right now...
  
   You may be interested in my related ticket #5029:
   http://hackage.haskell.org/trac/ghc/ticket/5059
  
   I don't think this is totally implausible but you have to be very
   careful with recursive functions.
  
   Max
 
 



Re: Cheap and cheerful partial evaluation

2011-08-22 Thread Edward Z. Yang
I think this ticket sums it up very nicely!

Cheers,
Edward

Excerpts from Max Bolingbroke's message of Mon Aug 22 04:07:59 -0400 2011:
 On 21 August 2011 19:20, Edward Z. Yang ezy...@mit.edu wrote:
  And no sooner do I send this email do I realize we have 'inline' built-in,
  so I can probably experiment with this right now...
 
 You may be interested in my related ticket #5029:
 http://hackage.haskell.org/trac/ghc/ticket/5059
 
 I don't think this is totally implausible but you have to be very
 careful with recursive functions.
 
 Max



Re: Cheap and cheerful partial evaluation

2011-08-21 Thread Edward Z. Yang
And no sooner do I send this email do I realize we have 'inline' built-in,
so I can probably experiment with this right now...

Edward

Excerpts from Edward Z. Yang's message of Sun Aug 21 14:18:23 -0400 2011:
 Hello all,
 
 It occurred to me that it might not be too difficult to use GHC's
 optimization passes as a cheap and cheerful partial evaluator.
 
 Consider some function we would like to partially evaluate:
 
 f = g h
 
 Partial evaluation proceeds as follows: calculate the type of f,
 inline and specialize g, inline and specialize h, and then optimize.
 Effectively, laser-guided inlining.
 
 With this (very) heavy hammer, we can, for example, solve the problem posed in
 http://hackage.haskell.org/trac/ghc/ticket/1349, simply by ensuring all of our
 strict functions are partially evaluated on the continuation handler
 appropriately.  (This is not ideal, since we ought to be able to share the
 strict worker/wrapper between instances, but might be a reasonable stop-gap for some use cases.)
 
 So, am I completely insane, or does this seem plausible and easy enough
 to implement to make it into GHC?
 
 Cheers,
 Edward
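
The built-in mentioned here is `inline` from GHC.Exts; a minimal use looks like this (`g` and `f` are arbitrary example functions):

```haskell
import GHC.Exts (inline)

g :: Int -> Int
g x = x * x + 1

-- 'inline' is semantically the identity function, but it instructs GHC
-- to unconditionally inline its argument at this call site -- the
-- "laser-guided inlining" described above.  The optimizer can then
-- specialize the inlined body in this particular context.
f :: Int -> Int
f y = inline g (y + 1)

main :: IO ()
main = print (f 2)
-- prints 10
```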



Re: What are the preconditions of newArray#

2011-08-21 Thread Edward Z. Yang
stg_newArrayzh in rts/PrimOps.cmm doesn't appear to give any indication,
so this might be a good patch to add.  But I'm curious: what would
allocating Array#s of size 0 do? Null pointers? That sounds dangerous...

Edward

Excerpts from Johan Tibell's message of Fri Aug 19 11:04:48 -0400 2011:
 Hi,
 
 I'm seeing a segfault which I suspect is due to allocating Array#s of
 size 0, using newArray#. Are zero length arrays allowed? What are the
 preconditions of newArray#? It'd be great if they were documented.
 
 -- Johan
 



Re: GHCJS

2011-08-02 Thread Edward Z. Yang
Excerpts from Victor Nazarov's message of Tue Aug 02 19:12:55 -0400 2011:
 I can parse arguments myself
 and throw the rest of them to parseDynamicFlags, but GHC's flags are
 really complicated and I'm not aware
 of any argument parsing library that can be used to filter out some
 specified flags and return the rest GHC's flags untouched.

Your best bet is to use '--' or something similar to demarcate GHCJS
flags, and GHC flags, and then manually split them up before passing
them off to your preferred command line parser.

Though this vaguely sounds like something that might be nice to support
in GHC proper, I would not be the best person to ask about this.

 But GHC always emit parse error on javascript keyword.
 
 For now I'm using (abusing) ccall calling convention and simple
 imports works pretty well, but I would like to support
 exports and static/dynamic wrappers. GHC generates C-code to support
 them, and GHCJS should generate Javascript-code,
 but I have no idea how to use GHC API to generate custom (Javascript)
 stubs. Is it possible at all?

That is certainly not possible; you'll need to patch GHC's lexer to do that.
But it's all a bit dodgy since this FFI doesn't make sense unless you are
generating Javascript code (which GHC is not.)  Maybe someone else can
comment on that. (Perhaps we need fully extensible calling convention
syntax? Hmmm!)

 I'd like to create ghcjs and ghcjs-pkg tools that will use their own
 directories and package index and will
 not interfere with ghc. Is it possible with GHC API? How can I do it?

Check out the approach that cabal-dev uses, which is about what you
are looking for here.  (Though, handling base packages might be tricky.)

http://hackage.haskell.org/package/cabal-dev

Cheers,
Edward



Re: hsc2hs and #include

2011-07-30 Thread Edward Z. Yang
This is supposed to get defined as a command line argument to the preprocessor,
see compiler/main/DriverPipeline.hs.  Are you saying you don't see it when you
run hsc2hs? Maybe someone else is calling a preprocessor but missing some of
these arguments...

Edward



Re: hsc2hs and #include

2011-07-30 Thread Edward Z. Yang
No, I don't think this diagnosis is correct.  hsc2hs is outputting preprocessor
directives into hs files that GHC will then process.  Inspect your .hs file,
at least for me, I don't see #INCLUDE pragmas output at all, with latest
hsc2hs (old versions just didn't output any ifdefs, so we'd hit the problem.)

Actually, we should check if this problem is actually still around.
Remember you need to use ghc-7.0.3's hsc2hs, not an arbitrary one lying
around (though  it may work).

Cheers,
Edward


Excerpts from Evan Laforge's message of Sat Jul 30 17:10:21 -0400 2011:
 On Sat, Jul 30, 2011 at 8:32 PM, Edward Z. Yang ezy...@mit.edu wrote:
  This is supposed to get defined as a command line argument to the 
  preprocessor,
  see compiler/main/DriverPipeline.hs.  Are you saying you don't see it when 
  you
  run hsc2hs? Maybe someone else is calling a preprocessor but missing some of
  these arguments...
 
 Yes, I don't see it when I run hsc2hs.  I don't see how a define from
 ghc is going to make it into the hsc2hs generated C file, since it
 just compiles the c file with no special flags.  Looking at the hsc2hs
 source, it runs the c compiler with cFlags, which I think ultimately
 comes from the flags.  Since it doesn't import anything out of ghc I
 don't know how it's supposed to get macros from there either, unless
 that was supposed to have gone into some header.
 
 Here's my hsc2hs line:
 
 hsc2hs -v -c g++ --cflag -Wno-invalid-offsetof
 -I/Library/Frameworks/GHC.framework/Versions/7.0.3-x86_64/usr/lib/ghc-7.0.3/include
 \
 -Ifltk -I. -I/usr/local/src/fltk-dev/fltk-1.3/ -D_THREAD_SAFE
 -D_REENTRANT -DMAC_OS_X_VERSION_MAX_ALLOWED=1060
 -DMAC_OS_X_VERSION_MIN_REQUIRED=1050  Ui/Style.hsc
 Executing: g++ -c Ui/Style_hsc_make.c -o Ui/Style_hsc_make.o
 -Wno-invalid-offsetof
 -I/Library/Frameworks/GHC.framework/Versions/7.0.3-x86_64/usr/lib/ghc-7.0.3/include
 -Ifltk -I. -I/usr/local/src/fltk-dev/fltk-1.3/
 Executing: g++ Ui/Style_hsc_make.o -o Ui/Style_hsc_make
 Executing: Ui/Style_hsc_make  Ui/Style.hs
 
 BTW, -Wno-invalid-offsetof was necessary to get the compiler to not
 complain about the C generated by the #poke and #peek macros, and with
 the latest version of hsc2hs I had to explicitly tell it where the ghc
 directory is... I guess it's supposed to get that out of its cabal but
 I'm not sure how.



Re: Understanding behavior of BlockedIndefinitelyOnMVar exception

2011-07-25 Thread Edward Z. Yang
Hello Brandon,

The answer is subtle, and has to do with what references are kept in code,
which make an object considered reachable.  Essentially, the main thread
itself keeps the MVar live while it still has forking to do, so that
it cannot get garbage collected and trigger these errors.

Here is a simple demonstrative program:

main = do
    lock <- newMVar ()
    forkIO (takeMVar lock)
    forkIO (takeMVar lock)
    forkIO (takeMVar lock)

Consider what the underlying code needs to do after it has performed
the first forkIO.  'lock' is a local variable that the code generator
knows it's going to need later in the function body. So what does it
do? It saves it on the stack.

// R1 is a pointer to the MVar
cqo:
Hp = Hp + 8;
if (Hp > HpLim) goto cqq;
I32[Hp - 4] = spd_info;
I32[Hp + 0] = R1;
I32[Sp + 0] = R1;
R1 = Hp - 3;
I32[Sp - 4] = spe_info;
Sp = Sp - 4;
jump stg_forkzh ();

(Ignore the Hp > HpLim; that's just the heap check.)

This lives on until we continue executing the main thread at spe_info
(at which point we may or may not deallocate the stack frame).  But what
happens instead?

cqk:
Hp = Hp + 8;
if (Hp > HpLim) goto cqm;
I32[Hp - 4] = sph_info;
I32[Hp + 0] = I32[Sp + 4];
R1 = Hp - 3;
I32[Sp + 0] = spi_info;
jump stg_forkzh ();

We keep the pointer to the MVar to the stack, because we know there
is yet /another/ forkIO (takeMVar lock) coming up. (It's located at
Sp + 4; you have to squint a little since Sp is being fiddled
with, but it's still there, we just overwrite the infotable with
a new one.)

Finally, spi_info decides we don't need the contents of Sp + 4 anymore,
and overwrites it accordingly:

cqg:
Hp = Hp + 8;
if (Hp > HpLim) goto cqi;
I32[Hp - 4] = spl_info;
I32[Hp + 0] = I32[Sp + 4];
R1 = Hp - 3;
I32[Sp + 4] = spm_info;
Sp = Sp + 4;
jump stg_forkzh ();

But in the meantime (esp. between invocation 2 and 3), the MVar cannot be
garbage collected, because it is live on the stack.

Could GHC have been more clever in this case?  Not in general, since deciding
whether a reference will actually be used boils down to the
halting problem.

loop = threadDelay 100 >> loop -- prevent blackholing from discovering this
main = do
    lock <- newEmptyMVar
    t1 <- newEmptyMVar
    forkIO (takeMVar lock >> putMVar t1 ())
    forkIO (loop `finally` putMVar lock ())
    takeMVar t1

Maybe we could do something where MVar references are known to be writer ends
or read ends, and let the garbage collector know that an MVar with only read
ends left is a deadlocked one.  However, this would be a very imprecise
analysis, and would not help in your original code (since all of your remaining
threads had the possibility of writing to the MVar: it doesn't become clear
that they can't until they all hit their takeMVar statements.)

Cheers,
Edward
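
A self-contained way to see the exception fire is the following; the RTS delivers it from the garbage collector once the MVar is otherwise unreachable, so it is expected to print "deadlock detected", though the exact timing is up to the runtime:

```haskell
import Control.Concurrent (MVar, newEmptyMVar, takeMVar)
import Control.Exception (try, BlockedIndefinitelyOnMVar(..))

-- The main thread blocks on an MVar that no other thread can ever
-- fill; once the GC sees the MVar is unreachable from anywhere else,
-- it wakes the thread with BlockedIndefinitelyOnMVar.
main :: IO ()
main = do
  r <- try $ do
         lock <- newEmptyMVar :: IO (MVar ())
         takeMVar lock
  case r of
    Left BlockedIndefinitelyOnMVar -> putStrLn "deadlock detected"
    Right ()                       -> putStrLn "impossible"
```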



Re: Understanding behavior of BlockedIndefinitelyOnMVar exception

2011-07-24 Thread Edward Z. Yang
Excerpts from Felipe Almeida Lessa's message of Sun Jul 24 22:02:36 -0400 2011:
 Does anything change if you somehow force a GC sometime after good2?
  Perhaps with some calculation generating garbage, perhaps with
 performGC.  IIRC, the runtime detects BlockedIndefinitelyOnMVar on GC.
  But I'm probably wrong =).

That's correct.

   resurrectThreads is called after garbage collection on the list of
   threads found to be garbage.  Each of these threads will be woken
   up and sent a signal: BlockedOnDeadMVar if the thread was blocked
   on an MVar, or NonTermination if the thread was blocked on a Black
   Hole.

Cheers,
Edward



Re: memory statistics via an API ?

2011-07-19 Thread Edward Z. Yang
Not currently, but I am planning on adding this functionality in the near
future.

Excerpts from Tim Docker's message of Wed Jul 20 13:44:41 -0400 2011:
 The +RTS -s runtime arguments give some useful details the memory 
 usage of a program on exit. eg:
 
   102,536 bytes allocated in the heap
 2,620 bytes copied during GC
36,980 bytes maximum residency (1 sample(s))
28,556 bytes maximum slop
 1 MB total memory in use (0 MB lost due to fragmentation)
 
 Is there any means of obtaining this information at runtime, via some 
 API? It would be useful for monitoring a long running server process.
 
 Tim
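
[Editorial note: this functionality later landed as the GHC.Stats module. A minimal sketch using the getRTSStats interface of recent GHC releases; note the counters are only collected when the program is run with "+RTS -T":]

```haskell
import GHC.Stats (getRTSStats, getRTSStatsEnabled,
                  RTSStats(allocated_bytes, max_live_bytes))

-- Poll allocation statistics from inside a running process.  Requires
-- the program to be started with "+RTS -T"; otherwise getRTSStats
-- would throw, so we check first and fall back gracefully.
main :: IO ()
main = do
  enabled <- getRTSStatsEnabled
  if enabled
    then do
      s <- getRTSStats
      putStrLn $ "allocated: " ++ show (allocated_bytes s)
              ++ ", max live: " ++ show (max_live_bytes s)
    else putStrLn "statistics not enabled; rerun with +RTS -T"
```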
 



Re: Hoopl: Examples of wrapFR or wrapBR?

2011-06-25 Thread Edward Z. Yang
Hello Justin,

If you grep Hoopl's source code for wrapFR and wrapBR, you can find
uses of the methods.  For example:

thenFwdRw :: forall m n f. Monad m
          => FwdRewrite m n f
          -> FwdRewrite m n f
          -> FwdRewrite m n f
-- @ end comb1.tex
thenFwdRw rw3 rw3' = wrapFR2 thenrw rw3 rw3'
 where
  thenrw :: forall m1 e x t t1.
            Monad m1 =>
            (t -> t1 -> m1 (Maybe (Graph n e x, FwdRewrite m n f)))
            -> (t -> t1 -> m1 (Maybe (Graph n e x, FwdRewrite m n f)))
            -> (t -> t1 -> m1 (Maybe (Graph n e x, FwdRewrite m n f)))
  thenrw rw rw' n f = rw n f >>= fwdRes
     where fwdRes Nothing   = rw' n f
           fwdRes (Just gr) = return $ Just $ fadd_rw rw3' gr

This usage of wrapFR2 doesn't take advantage of the extra polymorphism of
wrapFR2, but it takes two forward rewrites and composes them into a single 
rewrite.

Edward



Re: Superclass equalities

2011-06-22 Thread Edward Z. Yang
Yay! This is very exciting :-)

Edward

Excerpts from Simon Peyton-Jones's message of Wed Jun 22 12:57:28 -0400 2011:
 Friends
 
 I have long advertised a plan to allow so-called superclass equalities.  I've 
 just pushed patches to implement them. So now you can write
 
 class (F a ~ b) => C a b where { ... }
 
 This email is just to encourage you to try them out.  
 
 Currently this is just in the HEAD git repository.  It'll appear in GHC 7.2, 
 a release we are now actively preparing.  But the feature isn't heavily 
 tested (since it didn't exist until today), so I'd appreciate people trying 
 it out.
 
 Thanks
 
 Simon
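
A minimal sketch of a superclass equality in use on a recent GHC; the family `F` and class `C` are invented for illustration:

```haskell
{-# LANGUAGE TypeFamilies, MultiParamTypeClasses,
             FlexibleInstances, FlexibleContexts #-}

-- The superclass context forces b to equal F a, so every instance
-- must satisfy that equation (checked against the type instance).
type family F a
type instance F Int = Bool

class (F a ~ b) => C a b where
  convert :: a -> b

instance C Int Bool where
  convert = (> 0)

main :: IO ()
main = print (convert (3 :: Int) :: Bool)
-- prints True
```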
 



Re: gitweb on darcs.haskell.org?

2011-06-21 Thread Edward Z. Yang
All of the GHC repos are mirrored to Github, which offers similar
facilities.  Of course, it wouldn't be too much work to setup
gitweb on darcs.haskell.org, I don't think.

Edward



Re: MonoLocalBinds and hoopl

2011-06-17 Thread Edward Z. Yang
In case it wasn't clear, I'd very much be in favor of implementing
this refinement.

Cheers,
Edward



Re: MonoLocalBinds and hoopl

2011-06-14 Thread Edward Z. Yang
I ran into some more code like this, and I realized there was something
pretty important: the majority of let-bindings do not have any free varaibles.
They could very well be floated to the top level without having to make any
source level changes.

So maybe let should be generalized, if no free variables are captured.
Some food for thought.

Cheers,
Edward

Excerpts from Edward Z. Yang's message of Thu Dec 09 10:28:20 -0500 2010:
 Hello all,
 
 Here's an experience report for porting hoopl to manage MonoLocalBinds.  The
 Compiler.Hoop.XUtil module has a rather interesting (but probably common) 
 style of code
 writing, along the lines of this:
 
 fbnf3 (ff, fm, fl) block = unFF3 $ scottFoldBlock (ScottBlock f m l cat) block
 where f n = FF3 $ ff n
   m n = FF3 $ fm n
   l n = FF3 $ fl n
   FF3 f `cat` FF3 f' = FF3 $ f' . f
 
 f, m, l and cat are polymorphic functions that are only used once in the
 main expression, and are floated outside to improve readability.  However, 
 when
 MonoLocalBinds is turned on, these all become monomorphic and the definitions
 fail.  In contrast, this (uglier) version typechecks:
 
 fbnf3 (ff, fm, fl) block = unFF3 $ scottFoldBlock (ScottBlock (FF3 . ff) (FF3 
 . fm) (FF3 . fl) (\(FF3 f) (FF3 f') -> FF3 $ f' . f)) block
 
 One suggestion that I had was that we should generalize local bindings that
 are only used once, but Marlow pointed out that this would make the 
 typechecker
 more complex and I probably would agree.
 
 As a userspace developer, I have two options:
 
 1. Bite the bullet and put in the polymorphic type signatures (which
can be quite hefty)
 2. Inline the definitions
 3. Move the polymorphic functions into the global namespace
 
 (3) and (2) are not so nice because it breaks the nice symmetry between these
 definitions, which always define f, m, l for the many, many definitions in
 Hoopl of this style.
 
 Cheers,
 Edward
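
Option 1 (the explicit polymorphic signature) can be shown in a small standalone example; `f` and `g` here are invented, with `g` capturing a free variable so that MonoLocalBinds refuses to generalize it:

```haskell
{-# LANGUAGE MonoLocalBinds #-}

-- Under MonoLocalBinds, a local binding that captures a free variable
-- ('b' here) is not generalized, so without the signature 'g' would be
-- monomorphised at its first use and the second use (at a different
-- element type) would fail to typecheck.
f :: Bool -> ([Int], String)
f b = (g [1, 2, 3], g "abc")
  where
    g :: [a] -> [a]
    g xs = if b then xs else reverse xs

main :: IO ()
main = print (f True)
-- prints ([1,2,3],"abc")
```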
 



Re: testsuite results

2011-05-15 Thread Edward Z. Yang
To chime in, latest validate for me on x86-32 had two fails:

OVERALL SUMMARY for test run started at Sun May 15 16:16:28 BST 2011
2773 total tests, which gave rise to
   10058 test cases, of which
   0 caused framework failures
7598 were skipped

2377 expected passes
  81 expected failures
   0 unexpected passes
   2 unexpected failures

Unexpected failures:
   T3064(normal)   (the improved GHC)
   T5084(normal)   (the now typechecking expr)

Edward

Excerpts from Daniel Fischer's message of Thu May 12 10:59:01 -0400 2011:
 Running the testsuite with today's HEAD (perf build, but without profiling 
 to keep time bearable) resulted in:
 
 
 OVERALL SUMMARY for test run started at Do 12. Mai 13:34:13 CEST 2011
 2765 total tests, which gave rise to
 9300 test cases, of which
0 caused framework failures
 1587 were skipped
 
 7467 expected passes
  229 expected failures
9 unexpected passes
8 unexpected failures
 
 
 Pretty cool, I can't remember having so few unexpected failures before.
 
 
 Unexpected failures:
T5084(normal)
 
 That's  the compiler not complaining about an INLINE-pragma on a class 
 method without default implementation. Patch is in ghc-generics branch, not 
 yet in master, according to #5084. Anyway it's nothing serious (was a 
 feature request, not a bug).
 
dph-diophantine-opt(normal,threaded1,threaded2)
 
 These are due to a missing Show instance for [:Int:], a library issue.
 
dph-words-opt(normal)
 
 Fails with dph-words-opt: libraries/vector/Data/Vector/Generic.hs:369 
 (slice): invalid slice (1,2,2).
 No idea whether that's a library or a compiler issue.
 
hpc_markup_multi_001(normal)
hpc_markup_multi_002(normal)
hpc_markup_multi_003(normal)
 
 Those are due to hpc looking in the wrong directory for the tix files, 
 patch exists, but is not yet in the master branch, according to #5069.
 
 So, of the eight unexpected failures, six are due to trivia (they *might* 
 fail for other causes when the trivia are fixed, but there's no reason to 
 expect that), one is a feature request whose test reached testsuite/master 
 before the implementation reached ghc/master and only one may (but need 
 not) indicate a compiler bug at present, that's rather awesome.
 
 
 
 Unexpected passes:
mc01(hpc,ghci)
mc06(hpc,ghci)
mc08(hpc,ghci)
mc11(hpc)
mc16(hpc)
mc18(hpc)
 
 All these involve the new MonadComprehensions extension, they're expected 
 to work and do so for the normal and optasm ways, maybe they should also be 
 expected to work for hpc and ghci.
 
 
 Additionally, sometimes conc016(threaded2) passes unexpectedly; which 
 thread first gets its exception to the other one is impossible to predict:
 
 -- NB. this test is delicate since 6.14, because throwTo is now always
 -- interruptible, so the main thread's killThread can be legitimately
 -- interrupted by the child thread's killThread, rather than the other
 -- way around.  This happens because the child thread is running on
 -- another processor, so the main thread's throwTo is blocked waiting
 -- for a response, and while waiting it is interruptible.
 
 
 Summing up: Yay!
 



Re: Tracing idea

2011-05-12 Thread Edward Z. Yang
Thinking about this in more detail, I think this facility would be most
useful if it also contained information about what module/file the code
came from.  I'm currently attempting to track down a bug in the code generator
which I know is due to a miscompiled library, and it would make my life
substantially easier if there was an easy way to narrow down possible crash
sites.

I suppose one method already available to me is to get a list of suspicious
identifiers and then cross reference these against the generated object files.

Edward

Excerpts from Simon Marlow's message of Tue Mar 01 05:46:01 -0500 2011:
 On 21/02/2011 01:08, Edward Z. Yang wrote:
  Excerpts from Tyson Whitehead's message of Sun Feb 20 07:14:56 -0500 2011:
  I believe a back trace on the actual call stack is generally considered not
  that useful in a lazy language as it corresponds to the evaluation 
  sequence,
  That is, it is demand centric while written code is production centric
 
  Yeah, such a buffer wouldn't be very useful for end-users; I'm thinking more
  in terms of going backwards in time for the STG execution.
 
 Yes, that might be useful.  However it would require compiling all the 
 libraries with that option too - so it would be an internal debug option 
 for use with a live GHC build, not something you could use with a 
 pre-built GHC (well, you could use it, but you wouldn't get traces for 
 library code).
 
 Cheers,
 Simon



Re: performance issues in simple arithmetic code

2011-04-28 Thread Edward Z. Yang
Excerpts from Denys Rtveliashvili's message of Thu Apr 28 04:41:48 -0400 2011:
 Well.. I found some places in C-- compiler which are supposed to convert 
 division and multiplication by 2^n into shifts. And I believe these work 
 sometimes.

 However in this case I am a bit puzzled because even if I change the 
 constants in my example to 2^n like 1024 the code is not optimised.

You are referring to the mini-optimizer in cmm/CmmOpt.hs, correct?
Specifically:

cmmMachOpFold mop args@[x, y@(CmmLit (CmmInt n _))]
  = case mop of
        MO_Mul rep
           | Just p <- exactLog2 n ->
                 cmmMachOpFold (MO_Shl rep) [x, CmmLit (CmmInt p rep)]
        MO_U_Quot rep
           | Just p <- exactLog2 n ->
                 cmmMachOpFold (MO_U_Shr rep) [x, CmmLit (CmmInt p rep)]
        MO_S_Quot rep
           | Just p <- exactLog2 n,
             CmmReg _ <- x ->   -- We duplicate x below, hence require

See the third case.  This appears to be something of a delicate special case,
in particular, the incoming argument is required to be a register, which is not
the case in many instances:

sef_ret()
{ [const 0;, const 34;]
}
ceq:
Hp = Hp + 8;
if (Hp > I32[BaseReg + 92]) goto ceu;
_seg::I32 = %MO_S_Quot_W32(I32[R1 + 3], 1024); -- oops, it's a memory load
I32[Hp - 4] = GHC.Types.I#_con_info;
I32[Hp] = _seg::I32;
R1 = Hp - 3;
Sp = Sp + 4;
jump I32[Sp] ();
ceu:
I32[BaseReg + 112] = 8;
jump (I32[BaseReg - 8]) ();
}

(This is optimized Cmm, which you can get with -ddump-opt-cmm).

Multiplication, on the other hand, manages to pull it off more frequently:

sef_ret()
{ [const 0;, const 34;]
}
ceq:
Hp = Hp + 8;
if (Hp > I32[BaseReg + 92]) goto ceu;
_seg::I32 = I32[R1 + 3] << 10;
I32[Hp - 4] = GHC.Types.I#_con_info;
I32[Hp] = _seg::I32;
R1 = Hp - 3;
Sp = Sp + 4;
jump I32[Sp] ();
ceu:
I32[BaseReg + 112] = 8;
jump (I32[BaseReg - 8]) ();
}

This might be a poor interaction with the inliner. I haven't investigated fully 
though.
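
The reason the signed case demands a register operand is that a plain arithmetic shift rounds toward negative infinity, while MO_S_Quot must round toward zero; the corrected sequence therefore reads the operand twice (once for the sign test, once for the shift). The fix-up can be sketched at the Haskell level (quotPow2 is an invented name):

```haskell
import Data.Bits (shiftL, shiftR)

-- Signed division by 2^p via shifting: for negative operands we first
-- add 2^p - 1 so that the subsequent arithmetic shift rounds toward
-- zero, matching quot.  This is why the Cmm operand is duplicated and
-- hence must live in a register.
quotPow2 :: Int -> Int -> Int
quotPow2 x p = (x + adjust) `shiftR` p
  where adjust = if x < 0 then (1 `shiftL` p) - 1 else 0

main :: IO ()
main = print (and [ quotPow2 x 10 == x `quot` 1024
                  | x <- [-5000, -1, 0, 1, 5000] ])
-- prints True
```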

 By the way, is there any kind of documentation on how to hack C-- compiler?
 In particular, I am interested in:
 * how to run its optimiser against some C-- code and see what does it do
 * knowing more about its internals

GHC supports compiling C-- code; just name your file with a .cmm extension
and GHC will parse it and, if it's the native backend, do some minor
optimizations and register allocation.

As usual, the GHC Trac has some useful information:

- http://hackage.haskell.org/trac/ghc/wiki/Commentary/Compiler/CmmType
- http://hackage.haskell.org/trac/ghc/wiki/Commentary/Compiler/Backends/NCG

I also highly recommend reading cmm/OldCmm.hs and cmm/CmmExpr.hs, which explain
the internal AST we use for Cmm, as well as cmm/OldCmmPpr.hs and cmm/CmmParse.y 
(and
cmm/CmmLex.x) to understand textual C--.  Note that there is also a new C--
representation hanging around that is not too interesting for you, since we 
don't
use it at all without the flag -fnew-codegen.

Edward


