Re: Blocking a task indefinitely in the RTS

2019-01-06 Thread John Lato
Can you use an os-level structure? E.g. block on a file descriptor, socket,
or something like that?
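
For reference, a minimal sketch of the MVar pattern Tamar describes below:
a task blocks until an external event supplies a value. The registerWakeup
argument is a hypothetical stand-in for whatever C-side event source would
eventually fill the slot; this works under the threaded RTS, while under the
non-threaded RTS the scheduler's deadlock detection can instead wake the
thread with BlockedIndefinitelyOnMVar, which is the problem at hand.

  import Control.Concurrent.MVar

  -- Block the calling task until the wake-up action is invoked with a value.
  blockUntilWoken :: ((Int -> IO ()) -> IO ()) -> IO Int
  blockUntilWoken registerWakeup = do
    slot <- newEmptyMVar
    registerWakeup (putMVar slot)  -- hand the wake-up action to the event source
    takeMVar slot                  -- blocks here until the event fires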

On Sun, Jan 6, 2019, 10:37 Phyx wrote:

> Hi All,
>
> I'm looking for a way to block a task indefinitely until it is woken up by
> an external event in both the threaded and non-threaded RTS and returns a
> value that was stored/passed. MVar works great for the threaded RTS, but
> for the non-threaded there's a bunch of deadlock detection in the scheduler
> that would forcibly free the lock and resume the task with an opaque
> exception. This means that MVar and anything derived from it is not usable.
>
> STMs are more expensive but also have the same deadlock code. So again no
> go. The reason it looks like a deadlock to the RTS is that the "Wake-up"
> call in the non-threaded rts will come from C code running inside the RTS.
> The RTS essentially just sees all tasks blocked on its main capability and
> (usually rightly so) assumes a deadlock occurred.
>
> You have other states like BlockedOnForeign etc but those are not usable
> as a primitive. Previous iterations of I/O managers have each invented
> primitives for this such as asyncRead#, but they are not general and can't
> be re-used, and require a different solution for threaded and non-threaded.
>
> I have started making a new primitive IOPort for this, based on the MVar
> code, but this is not trivial... (currently I'm getting a segfault
> *somewhere* in the primitive's cmm code). The reason is that the semantics
> are decidedly different from what MVars guarantee. I should also mention
> that this is meant to be internal to base (i.e. not exported).
>
> So before I continue down this path and some painful debugging..., does
> anyone know of a way to block a task, unblock it later and pass a value
> back? It does not need to support anything complicated such as multiple
> take/put requests etc.
>
> Cheers,
> Tamar
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: True multi stage Haskell

2017-11-26 Thread John Lato
Hi Tim,

Several years ago I wrote a proof of concept of one way to implement this
in Haskell,
http://johnlato.blogspot.com/2012/10/runtime-meta-programming-in-haskell.html
.

I used TH to automatically lift expressions into a newtype-wrapped ExpQ
that could be used for staged evaluation. More than one stage would
probably have been tedious though, and a major problem with any non-trivial
code was getting all the imports in place. I do think it would have been
possible to automatically track imports though.
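
A condensed sketch of that idea (the names here are made up; the blog post
above has the real code): a staged value is just a quoted expression tagged
with a phantom type, and combinators build larger quotations.

  {-# LANGUAGE TemplateHaskell #-}
  import Language.Haskell.TH

  -- A staged computation of type 'a': a TH expression to be spliced or
  -- evaluated at a later stage.
  newtype Staged a = Staged { unStaged :: ExpQ }

  stageInt :: Int -> Staged Int
  stageInt n = Staged (litE (integerL (fromIntegral n)))

  -- Combining staged values only builds a bigger quotation.
  stagedAdd :: Staged Int -> Staged Int -> Staged Int
  stagedAdd (Staged x) (Staged y) = Staged [| $x + $y |]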

Of course that was 5 years ago, so state of the art has moved on. TH has
better facilities for this now, and the ghc API probably has something
better too.


On Fri, Nov 17, 2017, 10:06 Tim Sheard  wrote:

> After many years of hoping someone else would do this, I would like to
> make GHC into a true multi-stage programming language. Here is how I
> thought I might approach this.
>
> 1) Use the GHC as a library module.
> 2) Use the LLVM backend.
>
> I have no experience with either of these tools.
> Let's start simple. How would I write functions like
>
> compile :: String -> IO PtrToLLVMCode  -- where the string is a small
> Haskell program.
> llvmCompile :: PtrToLLVMCode -> IO PtrToMachineCode
> jumpTo :: PtrToMachineCode -> IO ans   -- where ans is the "type" of the
> string.
>
>
> Any thoughts on how to get started? What papers to read, examples to
> look at?
>
> I'd love to move to some more disciplined input type, a sort of (mono)
> typed program
> representation (with similar complexity) to Template Haskell Exp type.
>
> where (Exp t) is a data structure representing a Haskell program of type t.
>
> All offers of advice accepted. I'm trying to get started soon, and good
> advice
> about what to avoid is especially welcome.  If any one wanted to help
> with this,
> that would be great.
>
> Tim Sheard
>
> ___
> ghc-devs mailing list
> ghc-devs@haskell.org
> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>
___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Hoopl and Arrows

2015-03-03 Thread John Lato
Thomas,

I'd be very careful mixing effectful arrows with the CCA framework.
Effectful arrows are not necessarily commutative, so the CCA
transformations may be invalid.

I did something very similar a couple years ago, by creating an arrow
instance that generated TH expressions.  IME the ghc optimizer handled it
extremely well, and no extra optimizations were necessary.
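
As a rough illustration of that approach (hypothetical names, not the code I
actually used): composition just glues quoted lambdas together, so once the
result is spliced GHC sees ordinary first-order code to optimize.

  {-# LANGUAGE TemplateHaskell #-}
  import qualified Control.Category as C
  import Language.Haskell.TH

  -- An "arrow" from b to c, represented as a quoted function of type b -> c.
  newtype ExpArrow b c = ExpArrow { runExpArrow :: ExpQ }

  instance C.Category ExpArrow where
    id = ExpArrow [| \x -> x |]
    ExpArrow g . ExpArrow f = ExpArrow [| \x -> $g ($f x) |]

  -- Arrow methods are built the same way, e.g. 'first':
  firstE :: ExpArrow b c -> ExpArrow (b, d) (c, d)
  firstE (ExpArrow f) = ExpArrow [| \(b, d) -> ($f b, d) |]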

John L.

On 14:43, Mon, Mar 2, 2015 Justin Bailey jgbai...@gmail.com wrote:

 Thomas,

 I did some stuff with Hoopl a few years ago as part of my Master's
 thesis (Using Dataflow Optimization Techniques with a Monadic
 Intermediate Language). In Chapter 3 (The Hoopl Library), I wrote
 up an overview of Hoopl and worked through a new example (new at the
 time, at least).

 You can download my thesis from http://mil.codeslower.com. The example
 code from that chapter is available at
 https://github.com/m4dc4p/mil/blob/master/thesis/DeadCodeC.lhs. No
 guarantees that it still compiles, but I'm glad to try and help if needed
 :)


 On Mon, Mar 2, 2015 at 1:34 PM, Simon Peyton Jones
 simo...@microsoft.com wrote:
  Thomas
 
 
 
  Hoopl works on control flow graphs, whereas Core is lambda calculus.  I
  really don’t think Hoopl is going to be any good for optimising Core.
 
 
 
  Ask the ghc-devs mailing list too.  Someone there has been writing about
  arrows recently (take a look at the archive).
 
 
 
  Simon
 
 
 
  From: Thomas Bereknyei [mailto:tombe...@gmail.com]
  Sent: 02 March 2015 17:07
  To: n...@cs.tufts.edu; Simon Peyton Jones
  Subject: Hoopl and Arrows
 
 
 
   I've been interested in Arrows and their optimization recently. I've run
   into problems during basic tooling so I built a quasiquoter for proc
   notation (still working on rec and let statements, but the rest of it
   works) that converts from haskell-src-exts into a desugared TH format. My
   original goal was to implement Causal Commutative Arrows [1] but to make
   it more user friendly and to include arrows with effects. I came across
   Hoopl and at first glance I would think it can help with Arrow
   optimization, especially with the tuple shuffling from desugared proc-do
   notation. In spite of the slogan of Hoopl: Dataflow Optimization made
   Simple and the git repo's testing folder, I'm not 100% clear on Hoopl
   usage. I would be interested in anything that can get me started or even
   a thought or two on whether it would be worth looking into for Arrows.
 
  Simon, regarding [2], my quasiquoter can be rewritten to take `bind`,
  `bind_`, `fixA`, and `ifThenElseA` from scope rather than being built in.
  Would that help with #7828?  I am not very familiar with GHC internals at
  the moment, and I am confused about the complexity of dsArrows.hs in
  contrast with my own parser. It seems I have a good start on it in the
  Parser module at [3], WARNING: untested and rough draft code, though
   suggestions and pointers are welcome. I was thinking of sending some of
   my code to haskell-src-meta because they can't yet translate their
   version of Exp into TH's ExpQ.
 
  -Tom
 
 
  [1] (https://hackage.haskell.org/package/CCA)
  [2] (https://ghc.haskell.org/trac/ghc/ticket/7828#comment:54)
  [3] (https://www.fpcomplete.com/user/tomberek/rulestesting)
 
 
  ___
  ghc-devs mailing list
  ghc-devs@haskell.org
  http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
 
 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs


Re: Deprecating functions

2015-01-09 Thread John Lato
I agree with Johan.  I do think it makes sense to remove
deprecated/replaced functions, but only after N+2 cycles.

On 06:18, Fri, Jan 9, 2015 Richard Eisenberg e...@cis.upenn.edu wrote:


 On Jan 9, 2015, at 5:37 AM, Jan Stolarek jan.stola...@p.lodz.pl wrote:

  Especially that we're talking about internal TH module - I'll be
 surprised if
  there are more than 10 users.

 As I understand it, TH.Lib is not an internal module. Though I,
 personally, have never found the functions there to suit my needs as a
 user, I think the functions exported from there are the go-to place for
 lots of people using TH. For example, Ollie Charles's recent blog post on
 TH (https://ocharles.org.uk/blog/guest-posts/2014-12-22-template-haskell.html),
 written by Sean Westfall, uses functions exported
 from TH.Lib.

 I'm rather ambivalent on the deprecate vs. remove vs. hide vs. leave alone
 debate, but I do think we should treat TH.Lib as a fully public module as
 we're debating.

 Richard
 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Right way to turn off dynamic linking in build.mk

2014-12-30 Thread John Lato
I don't have an authoritative answer, but in the past I set
DYNAMIC_GHC_PROGRAMS and DYNAMIC_BY_DEFAULT as Johan originally suggested
and didn't have any issues with the resulting build.

On Tue Dec 30 2014 at 9:00:07 AM Johan Tibell johan.tib...@gmail.com
wrote:

 Not a good answer, I just set

 GhcLibWays = v
 DYNAMIC_GHC_PROGRAMS = NO
 DYNAMIC_BY_DEFAULT   = NO

 at the bottom of the file. This feels a bit hacky because we're overriding 
 GhcLibWays
 (e.g. we could be dropping the prof way by accident). I think it should be
 possible to state our desire (i.e. I don't want dyn) somewhere in the file
 and have that just work. Trying to manually change things like GhcLibWays
 is error prone.

 On Tue, Dec 30, 2014 at 11:48 AM, Tuncer Ayaz tuncer.a...@gmail.com
 wrote:

 On Thu, Dec 18, 2014 at 9:52 AM, Johan Tibell johan.tib...@gmail.com
 wrote:
  Some times when I play around with GHC I'd like to turn off dynamic
  linking to make GHC compile faster. I'm not sure what the right way
  to do this in build.mk. It's made confusing by the conditional
  statements in that file:
 
  GhcLibWays = $(if $(filter $(DYNAMIC_GHC_PROGRAMS),YES),v dyn,v)
 
  This line makes me worry that if I don't put
 
  DYNAMIC_GHC_PROGRAMS = NO
 
  in the right place in build.mk it won't take.
 
  There's also this one:
 
  ifeq $(PlatformSupportsSharedLibs) YES
  GhcLibWays += dyn
  endif
 
  Seeing this makes me wonder if
 
  DYNAMIC_GHC_PROGRAMS = NO
  DYNAMIC_BY_DEFAULT   = NO
 
  is enough or if the build system still sniffs out the fact that my
  platform supports dynamic linking.
 
  Could someone please give an authoritative answer to how to turn off
  dynamic linking?

 Hi Johan,

 did you find the answer?


 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: performance regressions

2014-12-17 Thread John Lato
Is it possible INLINE didn't inline the function because it's recursive? If
it were my function, I'd probably try a manual worker/wrapper.
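
For reference, a minimal sketch of the manual worker/wrapper shape I mean
(not the actual definition in GHC): the non-recursive wrapper carries the
INLINE pragma and the recursion lives in a local worker, so the wrapper can
still be inlined and specialised at each call site.

  {-# INLINE zipWithAndUnzipM #-}
  zipWithAndUnzipM :: Monad m => (a -> b -> m (c, d)) -> [a] -> [b] -> m ([c], [d])
  zipWithAndUnzipM f = go
    where
      -- recursive worker; only the small wrapper above gets inlined
      go (x:xs) (y:ys) = do
        (c, d)   <- f x y
        (cs, ds) <- go xs ys
        return (c : cs, d : ds)
      go _ _ = return ([], [])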

On 07:59, Wed, Dec 17, 2014 Simon Peyton Jones simo...@microsoft.com
wrote:

 I still would like to understand why INLINE does not make it inline.
 That's weird.

 Eg way to reproduce.

 Simon

 |  -Original Message-
 |  From: Richard Eisenberg [mailto:e...@cis.upenn.edu]
 |  Sent: 17 December 2014 15:56
 |  To: Simon Peyton Jones
 |  Cc: Joachim Breitner; ghc-devs@haskell.org
 |  Subject: Re: performance regressions
 |
 |  My unsubstantiated guess is that INLINEABLE would have the same effect
 |  as INLINE here, as GHC doesn't see fit to actually inline the
 |  function, even with INLINE -- the big improvement seen between (1) and
 |  (2) is actually specialization, not inlining. The jump from (2) to (3)
 |  is actual inlining. Thus, it seems that GHC's heuristics for inlining
 |  aren't working out for the best here.
 |
 |  I've pushed my changes, though I agree with Simon that more research
 |  may uncover even more improvements here. I didn't focus on the number
 |  of calls because that number didn't regress. Will look into this soon.
 |
 |  Richard
 |
 |  On Dec 17, 2014, at 4:15 AM, Simon Peyton Jones
 |  simo...@microsoft.com wrote:
 |
 |   If you use INLINEABLE, that should make the function specialisable
 |  to a particular monad, even if it's in a different module. You
 |  shouldn't need INLINE for that.
 |  
 |   I don't understand the difference between cases (2) and (3).
 |  
 |   I am still suspicious of why there are so many calls to this one
 |  function that it, alone, is allocating a significant proportion of
 |  compilation of the entire run of GHC.  Are you sure there isn't an
 |  algorithmic improvement to be had, to simply reduce the number of
 |  calls?
 |  
 |   Simon
 |  
 |   |  -Original Message-
 |   |  From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of
 |   | Richard Eisenberg
 |   |  Sent: 16 December 2014 21:46
 |   |  To: Joachim Breitner
 |   |  Cc: ghc-devs@haskell.org
 |   |  Subject: Re: performance regressions
 |   |
 |   |  I've learned several very interesting things in this analysis.
 |   |
 |   |  - Inlining polymorphic methods is very important. Here are some
 |   | data  points to back up that claim:
 |   | * Original implementation using zipWithAndUnzipM:
 |  8,472,613,440
 |   |  bytes allocated in the heap
 |   | * Adding {-# INLINE #-} to the definition thereof:
 |  6,639,253,488
 |   |  bytes allocated in the heap
 |   | * Using `inline` at call site to force inlining:
 |  6,281,539,792
 |   |  bytes allocated in the heap
 |   |
 |   |  The middle step above allowed GHC to specialize zipWithAndUnzipM
 |  to
 |   | my  particular monad, but GHC didn't see fit to actually inline
 |  the
 |   | function. Using `inline` forced it, to good effect. (I did not
 |   | collect  data on code sizes, but it wouldn't be hard to.)
 |   |
 |   |  By comparison:
 |   | * Hand-written recursion:6,587,809,112 bytes allocated in
 |  the
 |   |  heap
 |   |  Interestingly, this is *not* the best result!
 |   |
 |   |  Conclusion: We should probably add INLINE pragmas to Util and
 |   | MonadUtils.
 |   |
 |   |
 |   |  - I then looked at rejiggering the algorithm to keep the common
 |   | case  fast. This had a side effect of changing the
 |  zipWithAndUnzipM
 |   | to  mapAndUnzipM, from Control.Monad. To my surprise, this brought
 |   | disaster!
 |   | * Using `inline` and mapAndUnzipM:7,463,047,432 bytes
 |   |  allocated in the heap
 |   | * Hand-written recursion: 5,848,602,848 bytes
 |   |  allocated in the heap
 |   |
 |   |  That last number is better than the numbers above because of the
 |   | algorithm streamlining. But, the inadequacy of mapAndUnzipM
 |   | surprised  me -- it already has an INLINE pragma in Control.Monad
 |  of course.
 |   |  Looking at -ddump-simpl, it seems that mapAndUnzipM was indeed
 |   | getting  inlined, but a call to `map` remained, perhaps causing
 |   | extra  allocation.
 |   |
 |   |  Conclusion: We should examine the implementation of mapAndUnzipM
 |   | (and  similar functions) in Control.Monad. Is it as fast as
 |  possible?
 |   |
 |   |
 |   |
 |   |  In the end, I was unable to bring the allocation numbers down to
 |   | where  they were before my work. This is because the flattener now
 |   | deals in  roles. Most of its behavior is the same between nominal
 |   | and  representational roles, so it seems silly (though very
 |   | possible) to  specialize the code to nominal to keep that path
 |  fast.
 |   | Instead, I  identified one key spot and made that go fast.
 |   |
 |   |  Thus, there is a 7% bump to memory usage on very-type-family-
 |  heavy
 |   | code, compared to before my commit on Friday. (On more ordinary
 |   | code,  there is no noticeable change.)
 |   |
 |   |  Validating my patch 

Re: Reviving the LTS Discussions (ALT: A separate LTS branch)

2014-11-09 Thread John Lato
Austin, thanks for starting this thread.  I think David raises a lot of
very important points.  In particular, I don't think an LTS plan can be
successful unless there's significant demand (probably from commercial
Haskell users), and it would almost certainly require that those users make
a commitment to do some of the work themselves.

I think David's suggestion is probably a good place to start.  For that
matter, anyone who's interested could probably just fork the github repo,
pull in patches they want, and work away, but it does seem a bit nicer/more
efficient to be able to integrate with trac.

John

On Fri Nov 07 2014 at 11:45:34 PM David Feuer david.fe...@gmail.com wrote:

 GHC is an open source project. People work on it because

 1. They enjoy it and find it interesting,
 2. They need it to work well to support their own software,
 3. They're trying to write a paper/get a degree/impress their peers, or,
 in very rare cases,
 4. Someone pays them to do it.
 People are also willing to do some kinds of minor maintenance work because
 5. They feel a sense of obligation to the community
 but this is not likely, on its own, to keep many people active.

 What does this have to do with LTS releases? The fact is that having
 people who want an LTS release does not necessarily mean that anyone else
 should do much of anything about it. If they don't really care, they're
 likely to half-build an LTS process and then get sidetracked.

 So what do I think should be done about this? I think GHC headquarters
 should make a standing offer to any person, group, or company interested in
 producing an LTS release: an offer of Trac, and Phabricator, and
 Harbormaster, and generally all the infrastructure that GHC already uses.
 Also an offer of advice on how to manage releases, deal with common issues,
 etc. But a promise of programming power seems likely to be an empty one,
 and I don't see the point of trying to push it. If someone wants an LTS
 release, they need to either make one themselves or pay someone to do the
 job.
 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Avoiding the hazards of orphan instances without dependency problems

2014-10-21 Thread John Lato
Perhaps you misunderstood my proposal if you think it would prevent anyone
else from defining instances of those classes?  Part of the proposal was
also adding support to the compiler to allow for multiple files to use a
single module name.  That may be a larger technical challenge, but I think
it's achievable.

I think one key difference is that my proposal puts the onus on class
implementors, and David's puts the onus on datatype implementors, so they
certainly are complementary and could co-exist.

On Tue, Oct 21, 2014 at 9:11 AM, David Feuer david.fe...@gmail.com wrote:

 As I said before, it still doesn't solve the problem I'm trying to solve.
 Look at a package like criterion, for example. criterion depends on aeson.
 Why? Because statistics depends on it. Why? Because statistics wants a
 couple types it defines to be instances of classes defined in aeson. John
 Lato's proposal would require the pragma to appear in the relevant aeson
 module, and would prevent *anyone* else from defining instances of those
 classes. With my proposal, statistics would be able to declare

 {-# InstanceIn Statistics.AesonInstances AesonModule.AesonClass
 StatisticsType #-}

 Then it would split the Statistics.AesonInstances module off into a
 statistics-aeson package and accomplish its objective without stepping on
 anyone else. We'd get a lot more (mostly tiny) packages, but in exchange
 the dependencies would get much thinner.
 On Oct 21, 2014 11:52 AM, Stephen Paul Weber singpol...@singpolyma.net
 wrote:

 Somebody claiming to be John Lato wrote:

 Thinking about this, I came to a slightly different scheme.  What if we
 instead add a pragma:

 {-# OrphanModule ClassName ModuleName #-}


 I really like this.  It solve all the real orphan instance cases I've had
 in my libraries.

 --
 Stephen Paul Weber, @singpolyma
 See http://singpolyma.net for how I prefer to be contacted
 edition right joseph


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Avoiding the hazards of orphan instances without dependency problems

2014-10-19 Thread John Lato
Thinking about this, I came to a slightly different scheme.  What if we
instead add a pragma:

{-# OrphanModule ClassName ModuleName #-}

and furthermore require that, if OrphanModule is specified, all instances
can *only* appear in the module where the class is defined, the involved
types are defined, or the given OrphanModule?  We would also need to add
support for the compiler to understand that multiple modules may appear
under the same name, which might be a bit tricky to implement, but I think
it's feasible (perhaps in a restricted manner).

I think I'd prefer this when implementing orphan instances, and probably
when writing the pragmas as well.
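
For anyone skimming, a small illustration of the orphan situation both
proposals are aimed at (every module and type name below is hypothetical):
the instance sits in a module that defines neither the class nor the type,
which is exactly the placement the pragmas would police.

  {-# OPTIONS_GHC -fno-warn-orphans #-}
  module Stats.AesonOrphans () where

  import Acme.Aeson (ToJSON (..))   -- hypothetical class-defining package
  import Acme.Stats (Summary (..))  -- hypothetical type-defining package

  -- An orphan: neither ToJSON nor Summary is declared in this module.
  instance ToJSON Summary where
    toJSON (Summary mean var) = toJSON (mean, var)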

On Mon, Oct 20, 2014 at 1:02 AM, David Feuer david.fe...@gmail.com wrote:

 Orphan instances are bad. The standard approach to avoiding the orphan
 hazard is to always put an instance declaration in the module that declares
 the type or the one that declares the class. Unfortunately, this forces
 packages like lens to have an ungodly number of dependencies. Yesterday, I
 had a simple germ of an idea for solving this (fairly narrow) problem, at
 least in some cases: allow a programmer to declare where an instance
 declaration must be. I have no sense of sane syntax, but the rough idea is:

 {-# InstanceIn NamedModule [Context =] C1 T1 [T2 ...] #-}

 This pragma would appear in a module declaring a class or type. The named
 module would not have to be available, either now or ever, but attempting
 to declare such an instance in any module *other* than the named one would
 be an error by default, with a flag
 -XAllowForbiddenInstancesAndInviteNasalDemons to turn it off. The optional
 context allows multiple such pragmas to appear in the type/class-declaring
 modules, to allow overlapping instances (all of them declared in advance).

 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Avoiding the hazards of orphan instances without dependency problems

2014-10-19 Thread John Lato
I fail to see how this doesn't help lens, unless we're assuming no buy-in
from class declarations.  Also, your approach would require c*n pragmas to
be declared, whereas mine only requires c.  Also your method seems to
require having both the class and type in scope, in which case one could
simply declare the instance in that module anyway.

On Mon, Oct 20, 2014 at 9:29 AM, David Feuer david.fe...@gmail.com wrote:

 I don't think your approach is flexible enough to accomplish the purpose.
 For example, it does almost nothing to help lens. Even my approach should,
 arguably, be extended transitively, allowing the named module to delegate
 that authority, but such an extension could easily be put off till later.
 On Oct 19, 2014 7:17 PM, John Lato jwl...@gmail.com wrote:

 Thinking about this, I came to a slightly different scheme.  What if we
 instead add a pragma:

 {-# OrphanModule ClassName ModuleName #-}

 and furthermore require that, if OrphanModule is specified, all instances
 can *only* appear in the module where the class is defined, the involved
 types are defined, or the given OrphanModule?  We would also need to add
 support for the compiler to understand that multiple modules may appear
 under the same name, which might be a bit tricky to implement, but I think
 it's feasible (perhaps in a restricted manner).

 I think I'd prefer this when implementing orphan instances, and probably
 when writing the pragmas as well.

 On Mon, Oct 20, 2014 at 1:02 AM, David Feuer david.fe...@gmail.com
 wrote:

 Orphan instances are bad. The standard approach to avoiding the orphan
 hazard is to always put an instance declaration in the module that declares
 the type or the one that declares the class. Unfortunately, this forces
 packages like lens to have an ungodly number of dependencies. Yesterday, I
 had a simple germ of an idea for solving this (fairly narrow) problem, at
 least in some cases: allow a programmer to declare where an instance
 declaration must be. I have no sense of sane syntax, but the rough idea is:

 {-# InstanceIn NamedModule [Context =] C1 T1 [T2 ...] #-}

 This pragma would appear in a module declaring a class or type. The
 named module would not have to be available, either now or ever, but
 attempting to declare such an instance in any module *other* than the named
 one would be an error by default, with a flag
 -XAllowForbiddenInstancesAndInviteNasalDemons to turn it off. The optional
 context allows multiple such pragmas to appear in the type/class-declaring
 modules, to allow overlapping instances (all of them declared in advance).

 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs



___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Tentative high-level plans for 7.10.1

2014-10-08 Thread John Lato
Speaking for myself, I don't think the question of doing a 7.8.4 release at
all needs to be entangled with the LTS issue.

On Wed, Oct 8, 2014 at 8:23 AM, Edward Z. Yang ezy...@mit.edu wrote:

 Excerpts from Herbert Valerio Riedel's message of 2014-10-08 00:59:40
 -0600:
  However, should GHC 7.8.x turn out to become a LTS-ishly maintained
  branch, we may want to consider converting it to a similiar Git
  structure as GHC HEAD currently is, to avoid having to keep two
  different sets of instructions on the GHC Wiki for how to work on GHC
  7.8 vs working on GHC HEAD/7.10 and later.

 Emphatically yes.  Lack of submodules on the 7.8 branch makes working with
 it /very/ unpleasant.

 Edward
 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Tentative high-level plans for 7.10.1

2014-10-07 Thread John Lato
Ok, if the ghc devs decide to do a 7.8.4 release, I will explicitly commit
to helping backport patches.

However, I don't know how to do so.  Therefore, I'm going to ask Austin (as
he's probably the most knowledgeable) to update the 7.8.4 wiki page with
the process people should use to contribute backports.  I'm guessing it's
probably something like this:

checkout the 7.8.4 release branch (which branch is it? ghc-7.8?)
git cherry-pick the desired commit(s)
? (make a phab request for ghc-hq to review?)
update Trac with what you've done

(or if this is already documented somewhere, please point me to it).

Unfortunately this doesn't have any way of showing that I'm working on a
specific backport/merge, so there's potential for duplicate work, which
isn't great.  I also agree with Nicolas that it's likely possible to make
better use of git to help with this sort of work, but that's a decision for
ghc hq so I won't say any more on that.

Cheers,
John


On Tue, Oct 7, 2014 at 4:12 PM, Simon Peyton Jones simo...@microsoft.com
wrote:

 Thanks for this debate.  (And thank you Austin for provoking it by
 articulating a medium term plan.)

 Our intent has always been that that the latest version on each branch is
 solid.  There have been one or two occasions when we have knowingly
 abandoned a dodgy release branch entirely, but not many.

 So I think the major trick we are missing is this:

We don't know what the show-stopping bugs on a branch are

 For example, here are three responses to Austin's message:

 |  The only potential issue here is that not a single 7.8 release will be
 |  able to bootstrap LLVM-only targets due to #9439. I'm not sure how

 | 8960 looks rather serious and potentially makes all of 7.8 a no-go
 | for some users.

 |  We continue to use 7.2, at least partly because all newer versions of
 |  ghc have had significant bugs that affect us

 That's not good. Austin's message said about 7.8.4 "No particular pressure
 on any outstanding bugs to release immediately". There are several dozen
 tickets queued up on 7.8.4 (see here
 https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.8.4), but 95% of them
 are "nice to have".

 So clearly the message is not getting through.


 My conclusion

  * I think we (collectively!) should make a serious attempt to fix
 show-stopping
bugs on a major release branch.  (I agree that upgrading to the next
 major
release often simply brings in a new wave of bugs because of GHC's
rapid development culture.)

  * We can only possibly do this if
a) we can distinguish show-stopping from nice to have
b) we get some help (thank you John Lato for implicitly offering)

 I would define a show-stopping bug as one that simply prevents you from
 using the release altogether, or imposes a very large cost at the user end.

 For mechanism I suggest this.  On the 7.8.4 status page (or in general, on
 the release branch page you want to influence), create a section Show
 stoppers with a list of the show-stopping bugs, including some
 English-language text saying who cares so much and why.  (Yes I know that
 it might be there in the ticket, but the impact is much greater if there is
 an explicit list of two or three personal statements up front.)

 Concerning 7.8.4 itself, I think we could review the decision to abandon
 it, in the light of new information.  We might, for example, fix
 show-stoppers, include fixes that are easy to apply, and not-include other
 fixes that are harder.

 Opinions?  I'm not making a ruling here!

 Simon

 |  -Original Message-
 |  From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Ben
 |  Gamari
 |  Sent: 04 October 2014 04:52
 |  To: Austin Seipp; ghc-devs@haskell.org
 |  Cc: Simon Marlow
 |  Subject: Re: Tentative high-level plans for 7.10.1
 |
 |  Austin Seipp aus...@well-typed.com writes:
 |
 |  snip.
 |
 |  
 |   We do not believe we will ship a 7.8.4 at all, contrary to what you
 |   may have seen on Trac - we never decided definitively, but there is
 |   likely not enough time. Over the next few days, I will remove the
 |   defunct 7.8.4 milestone, and re-triage the assigned tickets.
 |  
 |  The only potential issue here is that not a single 7.8 release will be
 |  able to bootstrap LLVM-only targets due to #9439. I'm not sure how
 |  much of an issue this will be in practice but there should probably be
 |  some discussion with packagers to ensure that 7.8 is skipped on
 |  affected platforms lest users be stuck with no functional stage 0
 |  compiler.
 |
 |  Cheers,
 |
 |  - Ben

 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Tentative high-level plans for 7.10.1

2014-10-06 Thread John Lato
On Mon, Oct 6, 2014 at 5:38 PM, Johan Tibell johan.tib...@gmail.com wrote:

 On Mon, Oct 6, 2014 at 11:28 AM, Herbert Valerio Riedel 
 hvrie...@gmail.com wrote:

 On 2014-10-06 at 11:03:19 +0200, p.k.f.holzensp...@utwente.nl wrote:
  The danger, of course, is that people aren't very enthusiastic about
  bug-fixing older versions of a compiler, but for
  language/compiler-uptake, this might actually be a Better Way.

 Maybe some of the commercial GHC users might be interested in donating
 the manpower to maintain older GHC versions. It's mostly a
 time-consuming QA & auditing process to maintain old GHCs.


 What can we do to make that process cheaper? In particular, which are the
 manual steps in making a new GHC release today?


I would very much like to know this as well.  For ghc-7.8.3 there were a
number of people volunteering manpower to finish up the release, but to the
best of my knowledge those offers weren't taken up, which makes me think
that the extra overhead for coordinating more people would outweigh any
gains.  From the outside, it appears that the process/workflow could use
some improvement, perhaps in ways that would make it simpler to divide up
the workload.

John L.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Tentative high-level plans for 7.10.1

2014-10-05 Thread John Lato
Speaking as a user, I think Johan's concern is well-founded.  For us,
ghc-7.8.3 was the first of the 7.8 line that was really usable in
production, due to #8960 and other bugs.  Sure, that can be worked around
in user code, but it takes some time for developers to locate the issues,
track down the bug, and implement the workaround.  And even 7.8.3 has some
bugs that cause minor annoyances (either ugly workarounds or intermittent
build failures that I haven't had the time to debug); it's definitely not
solid.  Similarly, 7.6.3 was the first 7.6 release that we were able to use
in production.  I'm particularly concerned about ghc-7.10 as the AMP means
there will be significant lag in identifying new bugs (since it'll take
time to update codebases for that major change).

For the curious, within the past few days we've seen all the following,
some multiple times, all so far intermittent:

 ghc: panic! (the 'impossible' happened)
 (GHC version 7.8.3.0 for x86_64-unknown-linux):
 kindFunResult ghc-prim:GHC.Prim.*{(w) tc 34d}

 ByteCodeLink.lookupCE
 During interactive linking, GHCi couldn't find the following symbol:
 some_mangled_name_closure

 ghc: mmap 0 bytes at (nil): Invalid Argument

 internal error: scavenge_one: strange object 2022017865

Some of these I've mapped to likely ghc issues, and some are fixed in HEAD,
but so far I haven't had an opportunity to put together reproducible test
cases.  And that's just bugs that we haven't triaged yet, there are several
more for which workarounds are in place.

John L.

On Sat, Oct 4, 2014 at 2:54 PM, Johan Tibell johan.tib...@gmail.com wrote:

 On Fri, Oct 3, 2014 at 11:35 PM, Austin Seipp aus...@well-typed.com
 wrote:

  - Cull and probably remove the 7.8.4 milestone.
- Simply not enough time to address almost any of the tickets
  in any reasonable timeframe before 7.10.1, while also shipping them.
    - Only one, probably workaroundable, not game-changing
  bug (#9303) marked for 7.8.4.
- No particular pressure on any outstanding bugs to release
 immediately.
- ANY release would be extremely unlikely, but if so, only
  backed by the most critical of bugs.
- We will move everything in 7.8.4 milestone to 7.10.1 milestone.
  - To accurately catalogue what was fixed.
  - To eliminate confusion.


 #8960 looks rather serious and potentially makes all of 7.8 a no-go for
 some users. I'm worried that we're (in general) pushing too many bug fixes
 towards future major versions. Since major versions tend to add new bugs,
 we risk getting into a situation where no major release is really solid.


 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Build time regressions

2014-10-01 Thread John Lato
Hi Simon,

Thanks for replying.  Unfortunately the field in question wasn't being
unpacked, so there's something else going on.  But there's a decent chance
Richard has already fixed the issue; I'll check and report back if the
problem persists.  Unfortunately it may take me a couple days before I have
time to investigate fully.

However, I agree with your suggestion that GHC should not unpack wide
strict constructors without an explicit UNPACK pragma.

John

On Wed, Oct 1, 2014 at 4:57 PM, Simon Peyton Jones simo...@microsoft.com
wrote:

  It sounds as if there are two issues here:



 · *Should GHC unpack a !’d constructor argument if the
 constructor’s argument has a lot of fields?*  It probably isn’t
 profitable to unbox very large products, because it doesn’t save much
 allocation, and might *cause* extra allocation at pattern-match sites.
 So I think the answer is yes.  I’ll open a ticket.



 · *Is some library (binary? blaze?) creating far too much code in
 some circumstances?*  I have no idea about this, but it sounds fishy.
 Simply creating the large worker function should not make things go bad.



 Incidentally, John, using {-# NOUNPACK #-} !Bar would prevent the
 unpacking while still allowing the field to be strict.  It’s manually
 controllable.



 Simon







 *From:* ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of *John
 Lato
 *Sent:* 01 October 2014 00:45
 *To:* Edward Z. Yang
 *Cc:* Joachim Breitner; ghc-devs@haskell.org
 *Subject:* Re: Build time regressions



 Hi Edward,



 This is possibly unrelated, but the setup seems almost identical to a very
 similar problem we had in some code, i.e. very long compile times (6+
 minutes for 1 module) and excessive memory usage when compiling generic
 serialization instances for some data structures.



 In our case, I also thought that INLINE functions were the cause of the
 problem, but it turns out they were not.  We had a nested data structure,
 e.g.



  data Foo = Foo { fooBar :: !Bar, ... }



 with Bar very large (~150 records).



 even when we explicitly NOINLINE'd the function that serialized Bar, GHC
 still created a very large helper function of the form:



  serialize_foo :: Int# -> Int# -> ...



 where the arguments were the unboxed fields of the Bar structure, along
 with the other fields within Foo.  It appears that even though the
 serialization function was NOINLINE'd, it simply created a Builder, and
 while combining the Builder's ghc saw the full structure.  Our serializer
 uses blaze, but perhaps Binary's builder is similar enough the same thing
 could happen.



 Anyway, in our case the fix was to simply remove the bang pattern from the
 'fooBar' record field.  Then the serialize_foo function takes a Bar as an
 argument and serializes that.  I'm not entirely sure why compilation takes
 so much longer otherwise.  I've tried dumping the output of each simplifier
 phase and it clearly gets stuck at a certain point, but I didn't really
 debug in much detail so I don't recall the details.



 If you think this is related, I can investigate more thoroughly.



 Cheers,

 John L.



 On Wed, Oct 1, 2014 at 4:54 AM, Edward Z. Yang ezy...@mit.edu wrote:

 Hello Joachim,

 This was halfway known, but it sounds like we haven't solved
 it completely.

 The beginning of the sordid tale was when Cabal HEAD switched
 to using derived binary instances:
 https://ghc.haskell.org/trac/ghc/ticket/9583

 SPJ fixed the infinite loop bug in the simplifier, but apparently
 the deriving binary generates a lot of code, meaning a lot of
 memory. https://ghc.haskell.org/trac/ghc/ticket/9630
 hvr's fix was specifically to solve this problem.

 But it sounds like it didn't eliminate the regression entirely?
 If there's an unrelated regression, we should suss it out.  It would
 be helpful if someone could revert just the deriving changes,
 and see if this reverts the compilation time.

 Edward

 Excerpts from Joachim Breitner's message of 2014-09-30 13:36:27 -0700:
  Hi,
 
  the attached graph shows a noticable increase in build time caused by
 
   Update Cabal submodule & ghc-pkg to use new module re-export types
   author: Edward Z. Yang ezy...@cs.stanford.edu
 
 https://git.haskell.org/ghc.git/commit/4b648be19c75e6c6a8e6f9f93fa12c7a4176f0ae
 
  and only halfway mitigated by
 
  Update `binary` submodule in an attempt to address #9630
   author: Herbert Valerio Riedel h...@gnu.org
 
 https://git.haskell.org/ghc.git/commit/3ecca02516af5de803e4ff667c8c969c5bffb35f
 
 
  I am not sure if the improvement is related to the regression, but in
  any case: Edward, was such an increase expected by you? If not, can you
  explain it? Can it be avoided?
 
  Or maybe Cabal just became much larger... +38% in allocations when
  running haddock on it seems to confirm this.
 
  Greetings,
  Joachim
 
 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs

Re: Build time regressions

2014-09-30 Thread John Lato
Hi Edward,

This is possibly unrelated, but the setup seems almost identical to a very
similar problem we had in some code, i.e. very long compile times (6+
minutes for 1 module) and excessive memory usage when compiling generic
serialization instances for some data structures.

In our case, I also thought that INLINE functions were the cause of the
problem, but it turns out they were not.  We had a nested data structure,
e.g.

 data Foo = Foo { fooBar :: !Bar, ... }

with Bar very large (~150 records).

even when we explicitly NOINLINE'd the function that serialized Bar, GHC
still created a very large helper function of the form:

 serialize_foo :: Int# -> Int# -> ...

where the arguments were the unboxed fields of the Bar structure, along
with the other fields within Foo.  It appears that even though the
serialization function was NOINLINE'd, it simply created a Builder, and
while combining the Builder's ghc saw the full structure.  Our serializer
uses blaze, but perhaps Binary's builder is similar enough the same thing
could happen.

Anyway, in our case the fix was to simply remove the bang pattern from the
'fooBar' record field.  Then the serialize_foo function takes a Bar as an
argument and serializes that.  I'm not entirely sure why compilation takes
so much longer otherwise.  I've tried dumping the output of each simplifier
phase and it clearly gets stuck at a certain point, but I didn't really
debug in much detail so I don't recall the details.

If you think this is related, I can investigate more thoroughly.
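
For concreteness, a hedged sketch of the shape involved (field names are
hypothetical, and the real Bar had roughly 150 fields):

  data Bar = Bar
    { barA :: !Int
    , barB :: !Double
      -- ... ~150 further fields in the real code
    }

  data Foo = Foo
    { fooBar  :: !Bar    -- removing this bang was the fix described above;
                         -- {-# NOUNPACK #-} !Bar is the alternative raised
                         -- elsewhere in the thread for keeping the field
                         -- strict without the unpacking
    , fooName :: String
    }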

Cheers,
John L.

On Wed, Oct 1, 2014 at 4:54 AM, Edward Z. Yang ezy...@mit.edu wrote:

 Hello Joachim,

 This was halfway known, but it sounds like we haven't solved
 it completely.

 The beginning of the sordid tale was when Cabal HEAD switched
 to using derived binary instances:
 https://ghc.haskell.org/trac/ghc/ticket/9583

 SPJ fixed the infinite loop bug in the simplifier, but apparently
 the deriving binary generates a lot of code, meaning a lot of
 memory. https://ghc.haskell.org/trac/ghc/ticket/9630
 hvr's fix was specifically to solve this problem.

 But it sounds like it didn't eliminate the regression entirely?
 If there's an unrelated regression, we should suss it out.  It would
 be helpful if someone could revert just the deriving changes,
 and see if this reverts the compilation time.

 Edward

 Excerpts from Joachim Breitner's message of 2014-09-30 13:36:27 -0700:
  Hi,
 
  the attached graph shows a noticable increase in build time caused by
 
   Update Cabal submodule & ghc-pkg to use new module re-export types
   author: Edward Z. Yang ezy...@cs.stanford.edu
 
 https://git.haskell.org/ghc.git/commit/4b648be19c75e6c6a8e6f9f93fa12c7a4176f0ae
 
  and only halfway mitigated by
 
  Update `binary` submodule in an attempt to address #9630
   author: Herbert Valerio Riedel h...@gnu.org
 
 https://git.haskell.org/ghc.git/commit/3ecca02516af5de803e4ff667c8c969c5bffb35f
 
 
  I am not sure if the improvement is related to the regression, but in
  any case: Edward, was such an increase expected by you? If not, can you
  explain it? Can it be avoided?
 
  Or maybe Cabal just became much larger... +38% in allocations when
  running haddock on it seems to confirm this.
 
  Greetings,
  Joachim
 
 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Why isn't ($) inlining when I want?

2014-08-27 Thread John Lato
I sometimes think the solution is to make let-floating apply in fewer
cases.  I'm not sure we ever want to float out intermediate lists; the cost
of creating them is very small relative to the memory consumption if they
do happen to get shared.

My approach is typically to mark loop INLINE.  This very often results in
the code I want (with vector, which I use more than lists), but it is a big
hammer to apply.
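
Concretely, the workaround is just a pragma on the offending definition; a
minimal sketch using the example discussed below, so fusion with whatever g
the caller passes can happen at each call site before the constant list is
floated out:

  {-# INLINE loop #-}
  loop :: (Int -> Int) -> Int
  loop g = sum (map g [1..100])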

John


On Thu, Aug 28, 2014 at 5:56 AM, Dan Doel dan.d...@gmail.com wrote:

 I think talking about inlining of $ may not be addressing the crux of the
 problem here.

 The issue seems to be about functions like the one in the first message.
 For instance:

 loop :: (Int -> Int) -> Int
 loop g = sum . map g $ [1..100]

 Suppose for argument that we have a fusion framework that would handle
 this. The problem is that this does not actually turn into a loop over
 integers, because the constant [1..100] gets floated out. It instead
 builds a list/vector/whatever.

 By contrast, if we write:

 loop' :: Int
 loop' = sum . map (+1) $ [1..100]

 this does turn into a loop over integers, with no intermediate list.
 Presumably this is due to there being no work to be saved ever by floating
 the list out. These are the examples people usually test fusion with.

 And if loop is small enough to inline, it turns out that the actual code
 that gets run will be the same as loop', because everything will get
 inlined and fused. But it is also possible to make loop big enough to not
 inline, and then the floating will pessimize the overall code.

 So the core issue is that constant floating blocks some fusion
 opportunities. It is trying to save the work of building the structure more
 than once, but fusion can cause the structure to not be built at all. And
 the floating happens before fusion can reasonably be expected to work.

 Can anything be done about this?

 I've verified that this kind of situation also affects vector. And it
 seems to be an issue even if loop is written:

 loop g = sum (map g [1..100])

 -- Dan


 On Wed, Aug 27, 2014 at 3:38 PM, Simon Peyton Jones simo...@microsoft.com
  wrote:

  You'll have to do more detective work! In your dump I see "Inactive
  unfolding $".  So that's why it's not being inlined.  That message comes
 from CoreUnfold, line 941 or so.  The Boolean active_unfolding is passed in
 to callSiteInline from Simplify, line 1408 or so.  It is generated by the
 function activeUnfolding, defined in SimplUtils.

  But you have probably changed the CompilerPhase data type, so I can't
 guess what is happening.  But if you just follow it through I'm sure you'll
 find it.

 Simon

 | -Original Message-
 | From: David Feuer [mailto:david.fe...@gmail.com]
 | Sent: 27 August 2014 17:22
 | To: Simon Peyton Jones
 | Cc: ghc-devs
 | Subject: Re: Why isn't ($) inlining when I want?
 |
 | I just ran that (results attached), and as far as I can tell, it
 | doesn't even *consider* inlining ($) until phase 2.
 |
 | On Wed, Aug 27, 2014 at 4:03 AM, Simon Peyton Jones
 | simo...@microsoft.com wrote:
 |  It's hard to tell since you are using a modified compiler.  Try
 running
 | with -ddump-occur-anal -dverbose-core2core -ddump-inlinings.  That will
 | show you every inlining, whether failed or successful. You can see the
 | attempt to inline ($) and there is some info with the output that may
 | help to explain why it did or did not work.
 | 
 |  Try that
 | 
 |  Simon
 | 
 |  | -Original Message-
 |  | From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of
 | David
 |  | Feuer
 |  | Sent: 27 August 2014 04:50
 |  | To: ghc-devs; Carter Schonwald
 |  | Subject: Why isn't ($) inlining when I want?
 |  |
 |  | tl;dr  I added a simplifier run with inlining enabled between
 |  | specialization and floating out. It seems incapable of inlining
 |  | saturated applications of ($), and I can't figure out why. These are
 |  | inlined later, when phase 2 runs. Am I running the simplifier wrong
 | or
 |  | something?
 |  |
 |  |
 |  | I'm working on this simple little fusion pipeline:
 |  |
 |  | {-# INLINE takeWhile #-}
 |  | takeWhile p xs = build builder
 |  |   where
 |  | builder c n = foldr go n xs
 |  |   where
 |  | go x r = if p x then x `c` r else n
 |  |
 |  | foo c n x = foldr c n . takeWhile (/= (1::Int)) $ [-9..10]
 |  |
 |  | There are some issues with the enumFrom definition that break
 things.
 |  | If I use a fusible unfoldr that produces some numbers instead, that
 |  | issue goes away. Part of that problem (but not all of it) is that
 the
 |  | simplifier doesn't run to apply rules between specialization and
 full
 |  | laziness, so there's no opportunity for the specialization of
 |  | enumFromTo to Int to trigger the rewrite to a build form and fusion
 |  | with foldr before full laziness tears things apart. The other
 problem
 |  | is that inlining doesn't happen at all before full laziness, so
 | things
 |  | defined 

Re: Changing the -package dependency resolution algorithm

2014-07-24 Thread John Lato
How would this work with ghci?  If I'm understanding correctly, the
proposal means users could no longer do:

  $ ghci SomeFile.hs

and have it work without manually specifying all -package flags.  Did I
miss something?

I think it would work in conjunction with the package environments stuff,
provided that were available on all platforms ghc supports.

John L.


On Thu, Jul 24, 2014 at 11:12 PM, Edward Z. Yang ezy...@mit.edu wrote:

 Excerpts from Edward Z. Yang's message of 2014-07-24 15:57:05 +0100:
  - It assumes *-hide-all-packages* at the beginning.  This scheme
probably works less well without that: now we need some consistent
view of the database to start with.

 Actually, thinking about this, this dovetails nicely with the package
 environments work the IHG is sponsoring.  The idea behind a package
 environment is you specify some set of installed package IDs, which
 serves as the visible slice of the package database which is used for
 compilation.  Then, ghc called without any arguments is simply using
 the *default* package environment.

 Furthermore, when users install packages, they may or may not decide
 to add the package to their global environment, and they can be informed
 if the package is inconsistent with a package that already is in their
 environment (mismatched dependencies).  A user can also request to
 upgrade a package in their environment, and Cabal could calculate how
 all the other packages in the environment would need to be upgraded
 in order to keep the environment consistent, and run this plan for the
 user.

 Cheers,
 Edward
 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: a little phrustrated

2014-07-16 Thread John Lato
Speaking more as a bystander than anything else, I'd recommend that the ghc
dev team just go ahead and detab files. Yes, merging branches will be
painful, but it's a one-time pain. Better that than the ongoing pain mixed
tabs and spaces seem to be causing.
And merging doesn't even have to be that painful. Just cherry-pick the
detab commit into your wip branch; if there are any conflicts, resolve them
to your branch, detab again and commit. It could be completely automated.

John L.

On Jul 16, 2014 6:54 AM, Richard Eisenberg e...@cis.upenn.edu wrote:

 Hi all,

 I'm trying to use Phab for the first time this morning, and hitting a
fair number of obstacles. I'm writing up my experiences here in order to
figure out which of these are my fault, which can be fixed, and which are
just things to live with; and also to help others who may go down the same
path. If relevant, my diff is at https://phabricator.haskell.org/D73

 1) I had some untracked files in a submodule repo. I couldn't find a way
to get `arc diff` to ignore these, as they appeared to git to be a change
in a tracked file (that is, a change to a submodule, which is considered
tracked). `git stash` offered no help, so I had to delete the untracked
files. This didn't cause real pain (the files were there in error), but it
seems a weakness of the system if I can't make progress otherwise.

 2) I develop and build in the same tree. This means that I often have a
few untracked files in the outer, ghc.git repo that someone hasn't yet
added to .gitignore. Thus, I need to say `--allow-untracked` to get `arc
diff` to work. I will likely always need `--allow-untracked`, so I looked
for a way to get this to be configured automatically. I found
https://secure.phabricator.com/book/phabricator/article/arcanist/#configuration
,
but the details there are sparse. Any advice?

 3) The linter picks up and complains about tabs in any of my touched
files. I can then write an excuse for every `arc diff` I do, or de-tab the
files. In one case, I changed roughly one line in the file (MkCore.lhs) and
didn't think it right to de-tab the whole file. Even if I did de-tab the
whole file, then my eventual `arc land` would squash the whitespace commit
in with my substantive commits, which we expressly don't want. I can
imagine a fair amount of git fiddling which would push the whitespace
commit to master and then rebase my substantive work on top so that the
final, landed, squashed patch would avoid the whitespace changes, but this
is painful. And advice on this? Just ignore the lint errors and write silly
excuses? Or, is there a way Phab/arc can be smart enough to keep
whitespace-only commits (perhaps tagged with the words whitespace only in
the commit message) separate from other commits when squashing in `arc
land`?

 4) For better or worse, we don't currently require every file to be
tab-free, just some of them. Could this be reflected in Phab's lint
settings to avoid the problem in (3)? (Of course, a way to de-tab and keep
the history nice would be much better!)

 5) In writing my revision description, I had to add reviewers. I assumed
these should be comma-separated. This worked and I have updated the Wiki.
Please advise if I am wrong.

 6) When I looked at my posted revision, it said that the revision was
closed... and that I had done it! slyfox on IRC informed me that this was
likely because I had pushed my commits to a wip/... branch. Is using wip
branches with Phab not recommended? Or, can Phab be configured not to close
revisions if the commit appears only in wip/... branches?

 7) How can I re-open my revision?

 8) Some time after posting, phaskell tells me that my build failed. OK.
This is despite the fact that Travis was able to build the same commit (
https://travis-ci.org/ghc/ghc/builds/30066130). I go to find out why it
failed, and am directed to build log F3870 (
https://phabricator.haskell.org/file/info/PHID-FILE-hz2r4sjamkkrbf7nsz6b/).
I can't view the file online, but instead have to download and then ungzip
it. Is it possible to view this file directly? Or not have it be compressed?

 9) When I do view the build log, I get no answers. The end of the file
comes abruptly in the middle of some haddock output, and the closest thing
that looks like an error is about a missing link in a haddock tag
`$kind_subtyping` in Type.lhs. I didn't touch this file, and I imagine the
missing link has been there for some time, so I'm dubious that this is the
real problem. Are these log files cut off?

 10) More of a question than a phrustration: is there a way to link
directly to Trac tickets and/or wiki pages from Phab comments? I like the
Phab:D73 syntax from Trac to Phab, and thanks, Austin, for adding the field
at the top of Trac tickets to Phab revisions.


 I did fully expect to hit a few bumps on my first use of this new tool,
but it got to the point where I thought I should seek some advice before
continuing to muddle through -- hence this email. I do hope my tone is 

Re: GHC silently turns off dynamic output, should this be an error?

2014-06-24 Thread John Lato
On Jun 24, 2014 1:36 PM, Ian Lynagh ig...@earth.li wrote:

 On Mon, Jun 23, 2014 at 12:58:16PM -0500, Christopher Rodrigues wrote:
 
  Additionally, is it ever valid to have a pair of .hi and .dyn_hi files
with
  different interface hashes?

 You can, for example, compile package foo the vanilla way with -O, and
 the dynamic way without -O. You'll then get mismatched hashes.

Also, any file that uses Template Haskell could potentially generate
different code at each compilation.  But IMHO this isn't something ghc
should try to support.

John L.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Proposal: GHC.Generics marked UNSAFE for SafeHaskell

2013-10-07 Thread John Lato
Also, I'm not really sure this is a bug, per se.  In my opinion when you
allow for some sort of generic operations (whether via GHC.Generics or
Data), it's roughly equivalent to exporting all the constructors.  I don't
see how it would work otherwise.

But since there's precedent for Typeable, maybe Generics should be
restricted in SafeHaskell as well.
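
For concreteness, a rough sketch in the spirit of the safe-bugtest repo
quoted below -- the names and module split are illustrative, not the repo's
actual code:

{-# LANGUAGE DeriveGeneric #-}
module Main where

import GHC.Generics

-- Pretend Pos lives in its own module, exporting the type and mkPos but
-- not the Pos constructor.
newtype Pos = Pos Int deriving (Show, Generic)

mkPos :: Int -> Maybe Pos
mkPos n
  | n > 0     = Just (Pos n)
  | otherwise = Nothing

-- The derived Generic instance is enough for a client to rebuild any
-- value of the type, skipping the smart constructor entirely.
forgePos :: Int -> Pos
forgePos n = to (M1 (M1 (M1 (K1 n))))

main :: IO ()
main = print (forgePos (-2))   -- prints Pos (-2), which mkPos would never allow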


On Mon, Oct 7, 2013 at 1:05 AM, John Lato jwl...@gmail.com wrote:

 Well, no.  Presumably the example shouldn't compile at all.  That message
 is more an indication that the demonstration is working as intended.

 On Mon, Oct 7, 2013 at 12:31 AM, Carter Schonwald 
 carter.schonw...@gmail.com wrote:

 i assume 
 https://github.com/JohnLato/safe-bugtest/blob/master/Main.hs#L13 should say
 putStrLn "Should print \"Pos (2)\""

 rather than -2?


 On Mon, Oct 7, 2013 at 1:23 AM, Carter Schonwald 
 carter.schonw...@gmail.com wrote:

 ooo, that's illuminating.

 thanks for cooking that up


 On Mon, Oct 7, 2013 at 1:13 AM, John Lato jwl...@gmail.com wrote:

 On Sun, Oct 6, 2013 at 10:14 PM, Ryan Newton rrnew...@gmail.com wrote:


  On Sun, Oct 6, 2013 at 6:28 PM, Ganesh Sittampalam gan...@earth.li wrote:

  - Referential transparency: e.g. no unsafePerformIO

  - Module boundary control: no abstraction violation like Template
 Haskell and GeneralizedNewtypeDeriving
  - Semantic consistency: importing a safe module can't change existing
 code, so no OverlappingInstances and the like

 Is this change necessary to preserve the existing properties, or are
 you
 hoping to add a new one?


 I'm not currently aware of ways to break these invariants *just* with
 GHC.Generics.  Hmm, but I would like to know why it is marked trustworthy
 and not inferred-safe...


 How about this demo repo? https://github.com/JohnLato/safe-bugtest

 I'm really not a safe haskell expert, but I believe this is a
 demonstration of using GHC.Generics to violate a module's abstraction
 boundaries with SafeHaskell enabled.

 If I'm incorrect, I would appreciate if somebody could explain my
 error.  If, however, I'm correct, then I think that Ryan's proposal of
 marking GHC.Generics Unsafe is the best way to remedy the problem.

 A possible stumbling block may involve base and package-trust, but I'm
 not certain of the current status.

 John L.

 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs





___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Proposal: GHC.Generics marked UNSAFE for SafeHaskell

2013-10-07 Thread John Lato
Andres is right that it's not as evil as defining your own Typeable.  The
crux of the matter is that Generic essentially allows full access to the
data type.  Unfortunately it's easy to forget this...


On Mon, Oct 7, 2013 at 1:43 AM, Andres Löh and...@well-typed.com wrote:

 While I understand you all feel uncomfortable with this, I do not
 think the problem demonstrated by John has anything to do with
 Generic.

 I've made a fork here

 https://github.com/kosmikus/safe-bugtest

 that shows an (IMHO) similar problem using Show and Read instead of
 Generic. (And something slightly different could certainly also
 be produced using Enum).
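
 A minimal sketch of the flavour of thing I mean (my own illustration, not
 literally the code in that fork):

 newtype Pos = Pos Int deriving (Show, Read)

 -- With Read derived, a client can round-trip a forged string and bypass
 -- whatever smart constructor the module would export:
 forged :: Pos
 forged = read "Pos (-2)"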

 If you're deriving Generic, then yes, you gain the functionality of
 that class, which is to inspect the structure of the type, and then
 yes, you can in principle construct any value of that type. So
 deriving Generic for types that should be abstract is always going to
 be risky. But this is no different than deriving any other class, only
 that Generic gives you particularly fine-grained access to the
 internals of a type.

 Also, at least in my opinion, it is entirely valid to define your own
 Generic instances. It's more work, and while I haven't used it often
 so far, I can imagine that there are good use cases. I don't think
 it's anywhere near as evil as defining your own Typeable instances.

 Cheers,
   Andres

 --
 Andres Löh, Haskell Consultant
 Well-Typed LLP, http://www.well-typed.com

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Proposal: GHC.Generics marked UNSAFE for SafeHaskell

2013-10-06 Thread John Lato
On Sun, Oct 6, 2013 at 10:14 PM, Ryan Newton rrnew...@gmail.com wrote:


 On Sun, Oct 6, 2013 at 6:28 PM, Ganesh Sittampalam gan...@earth.li wrote:

  - Referential transparency: e.g. no unsafePerformIO

  - Module boundary control: no abstraction violation like Template
 Haskell and GeneralizedNewtypeDeriving
  - Semantic consistency: importing a safe module can't change existing
 code, so no OverlappingInstances and the like

 Is this change necessary to preserve the existing properties, or are you
 hoping to add a new one?


 I'm not currently aware of ways to break these invariants *just* with
 GHC.Generics.  Hmm, but I would like to know why it is marked trustworthy
 and not inferred-safe...


How about this demo repo? https://github.com/JohnLato/safe-bugtest

I'm really not a safe haskell expert, but I believe this is a demonstration
of using GHC.Generics to violate a module's abstraction boundaries with
SafeHaskell enabled.

If I'm incorrect, I would appreciate if somebody could explain my error.
If, however, I'm correct, then I think that Ryan's proposal of marking
GHC.Generics Unsafe is the best way to remedy the problem.

A possible stumbling block may involve base and package-trust, but I'm not
certain of the current status.
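
For reference, the pieces involved are the standard Safe Haskell pragmas and
flags, roughly like this (the module name is just for illustration):

{-# LANGUAGE Safe #-}
module Client where

-- If GHC.Generics were marked Unsafe, this import would be rejected in a
-- Safe module.  While it remains Trustworthy, compiling with
-- -fpackage-trust additionally requires that base be trusted, e.g.
--   ghc -fpackage-trust -trust base Client.hs
import GHC.Generics (Generic)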

John L.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: llvm calling convention matters

2013-09-20 Thread John Lato
I think Geoffrey's suggestion is workable, but I wouldn't like it so much.
The difficulty is that if you aren't aware of it, it's possible to build up
a large system that you discover is unworkable the first time you try to
compile it.

In that sense it's not too different from Template Haskell staging
restrictions, which IIRC exist to deal with exactly this problem.

As the necessity of compile-time constants has shown up at least twice, a
more principled solution is worth investigating.

In the meantime, a horrible hack would be something like:

{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH (ExpQ)
newtype CStatic a = CStatic ExpQ
instance Num a => Num (CStatic a) where
  fromInteger x = CStatic [| x |]   -- other Num methods omitted

and then the value could be spliced at the call site. Staging restrictions
would ensure it's available at compile time. I guess the instance decl
needs Lift too.
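
To make the call-site step concrete, here is a hypothetical use from a
separate module (it has to be separate because of the staging restriction;
CStaticLib is a made-up name for wherever the newtype above would live):

{-# LANGUAGE TemplateHaskell #-}
module UseShift where

import CStaticLib (CStatic (..))  -- hypothetical home of the newtype above

-- The literal goes through fromInteger, so the splice receives a quoted
-- expression and inserts a genuine compile-time constant; anything not
-- known at compile time simply fails to splice (albeit with the ugly
-- errors mentioned below).
three :: Int
three = $(case (3 :: CStatic Int) of CStatic e -> e)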

Downsides are general hideousness, misleading error messages, and the
necessity of compiling with Template Haskell. But you'd get an error if a
value isn't a compile-time constant. (Please don't put this in ghc, but
it's not so bad as a separate lib).

John L.
On Sep 19, 2013 3:44 PM, Geoffrey Mainland mainl...@apeiron.net wrote:

 If you pass a constant, unboxed value to a primop, I assume GHC won't
 ever bind the argument to a value. So although we have no way to enforce
 static const argument in the type system, if this is a valuable (and
 experts-only?) operation, I'm not sure it matters that much if the user
 gets an error at code-generation time complaining about non-const
 arguments.

 Another way to look at it: if we wait until someone enhances the type
 system to support the notion of static arguments, we will likely never
 have a bit shuffle primitive.

 The other option would be to fall back on a different implementation if
 we saw a non-constant argument. I think that would actually be worse
 than erroring out, but I'm sure others would disagree.

 Geoff

 On 09/19/2013 11:42 AM, Carter Schonwald wrote:
  tldr; we can't express / expose the LLVM shuffleVector intrinsic in a
  type safe way that will correctly interact with the static argument
  requirement for associated code generation.
 
 
 
 
  On Thu, Sep 19, 2013 at 12:40 AM, Carter Schonwald
   carter.schonw...@gmail.com wrote:
 
  yup, i hit a gap in what we can currently express in haskell
  types. We don't have a way of expressing static data! I actually
   put a ticket on trac noting
  this. http://ghc.haskell.org/trac/ghc/ticket/8107
  (note that when i was initially writing the ticket, i incorrectly
  thought the int# arg to ghc's prefetch was the locality level
  rather than a byte offset)
 
   Currently GHC has no way of expressing "this argument needs to be
   a static compile/codegen time constant" in surface haskell or
   core! This means we could at best provide a suite of special cased
  operations. (eg: we could provide the inter-lane shuffle for
  swapping halves of YMM registers, and the miniature analogue for
  XMM), but that would really be missing the point: being able to
  write complex algorithms that can work completely in registers!
 
  the vast majority of the simd shuffle operations have certain
  arguments that need to be compile time static values that are used
  in the actual code generation. The llvm data model doesn't express
  this constraint. This invariant failure was also hit internally
  recently  via a bug in how GHC generated code  for llvm's
  memcopy! http://ghc.haskell.org/trac/ghc/ticket/8131
 
   If we could express llvm's shuffleVector
   (http://llvm.org/docs/LangRef.html#shufflevector-instruction)
  intrinsic in a type safe way, then we could express any of them. I
  would be over the moon if we could expose an operation like
   shuffleVector, but I don't think GHC currently can express it in a
  type safe way that won't make LLVM vomit.
 
  I want simd shuffle, but i don't see how to give the fully general
  shuffle operations in type safe ways with ghc currently. We need
   to add support for some notion of static data first! If there's a
  way, i'm all for it, but I don't see such a way.
 
   I hope that answers your question. That seems to be a deep enough
   issue that there's no way to resolve it with simple engineering in
  the next few weeks.
 
  -Carter
 
 
 
 
  On Wed, Sep 18, 2013 at 9:41 PM, Geoffrey Mainland
   mainl...@apeiron.net wrote:
 
  On 09/18/2013 04:49 PM, Carter Schonwald wrote:
   I've some thoughts on how to have a better solution, but
  they are
   feasible only on a time scale suitable for 7.10, and not for
  7.8.
  
   a hacky solution we could do for 7.8 perhaps is have a
  warning that
   works as follows:
  
   either
   a)
   throw a warning on functions that 

Re: Proposal: better library management ideas (was: how to checkout proper submodules)

2013-06-10 Thread John Lato
On Mon, Jun 10, 2013 at 1:32 PM, Roman Cheplyaka r...@ro-che.info wrote:

 * John Lato jwl...@gmail.com [2013-06-10 07:59:55+0800]
  On Mon, Jun 10, 2013 at 1:32 AM, Roman Cheplyaka r...@ro-che.info
 wrote:
 
  
   What I'm trying to say here is that there's hope for a portable base.
   Maybe not in the form of split base — I don't know.
   But it's the direction we should be moving anyways.
  
   And usurping base by GHC is a move in the opposite direction.
 
 
  Maybe that's a good thing?  The current situation doesn't really seem to
 be
  working.  Keeping base separate negatively impacts workflow of GHC devs
 (as
  evidenced by these threads), just to support something that other
 compilers
  don't use anyway.  Maybe it would be easier to fold base back into ghc
 and
  try again, perhaps after some code cleanup?  Having base in ghc may
 provide
  more motivation to separate it properly.

 After base is in GHC, separating it again will be only harder, not
 easier. Or do you have a specific plan in mind?


It's more about motivation.  It seems to me right now base is in a halfway
state.  People think that moving it further away from ghc is The Right
Thing To Do, but nobody is feeling enough pain to be sufficiently motivated
to do it.  If we apply pain, then someone will be motivated to do it
properly.  And if nobody steps up, maybe having a platform-agnostic base
isn't really very important.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: Proposal: better library management ideas (was: how to checkout proper submodules)

2013-06-09 Thread John Lato
On Mon, Jun 10, 2013 at 1:32 AM, Roman Cheplyaka r...@ro-che.info wrote:


 What I'm trying to say here is that there's hope for a portable base.
 Maybe not in the form of split base — I don't know.
 But it's the direction we should be moving anyways.

 And usurping base by GHC is a move in the opposite direction.


Maybe that's a good thing?  The current situation doesn't really seem to be
working.  Keeping base separate negatively impacts workflow of GHC devs (as
evidenced by these threads), just to support something that other compilers
don't use anyway.  Maybe it would be easier to fold base back into ghc and
try again, perhaps after some code cleanup?  Having base in ghc may provide
more motivation to separate it properly.
___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: GHC 7.8 release?

2013-02-10 Thread John Lato
While I'm notionally in favor of decoupling API-breaking changes from
non-API breaking changes, there are two major difficulties: GHC.Prim and
Template Haskell. Should a non-API-breaking change mean that GHC.Prim is
immutable?  If so, this greatly restricts GHC's development.  If not, it
means that a large chunk of hackage will become unbuildable due to deps on
vector and primitive.  With Template Haskell the situation is largely
similar, although the deps are different.

What I would like to see are more patch-level bugfix releases.  I suspect
the reason we don't have more is that making a release is a lot of work.
 So, Ian, what needs to happen to make more frequent patch releases
feasible?



On Mon, Feb 11, 2013 at 7:42 AM, Carter Schonwald 
carter.schonw...@gmail.com wrote:

 Well said. Having a more aggressive release cycle is another interesting
 perspective.
 On Feb 10, 2013 6:21 PM, Gabriel Dos Reis g...@integrable-solutions.net
 wrote:

 On Sun, Feb 10, 2013 at 3:16 PM, Ian Lynagh i...@well-typed.com wrote:
  On Sun, Feb 10, 2013 at 09:02:18PM +, Simon Peyton-Jones wrote:
 
  You may ask what use is a GHC release that doesn't cause a wave of
 updates?  And hence that doesn't work with at least some libraries.  Well,
 it's a very useful forcing function to get new features actually out and
 tested.
 
  But the way you test new features is to write programs that use them,
  and programs depend on libraries.
 
 
  Thanks
  Ian

 Releasing GHC early and often (possibly with API breakage) isn't
 really the problem.  The real problem is how to coordinate with
 library authors (e.g. Haskell Platform), etc.

 I suspect GHC should continue to offer a platform for research
 and experiments. That is much harder if you curtail the ability to
 release GHC early and often.

 -- Gaby

 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs


 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs