Re: enumFromThenTo for Doubles

2016-08-09 Thread Andrew Farmer
Turns out the accumulated error is even worse:

Prelude> let old x y z = let eftt i j = i : eftt j (j+j-i) in let d =
y - x in maximum $ takeWhile (<= z + d) $ eftt x y
Prelude> old 0.0 0.1 86400.0
86400.005062
Prelude> let new x y z = let d = y - x in let go i = i : go (i + d) in
maximum $ takeWhile (<= z + d) $ go x
Prelude> new 0.0 0.1 86400.0
86400.0054126

Sorry to spam the list. :-P Floating point is hard.

On Tue, Aug 9, 2016 at 8:22 PM, Andrew Farmer <xicheko...@gmail.com> wrote:
> Noticed this today:
>
> ghci> let xs = [0.0,0.1 .. 86400.0] in maximum xs
> 86400.005062
>
> enumFromThenTo is implemented by numericEnumFromThenTo:
>
> https://github.com/ghc/ghc/blob/a90085bd45239fffd65c01c24752a9bbcef346f1/libraries/base/GHC/Real.hs#L227
>
> Which probably accumulates error in numericEnumFromThen with the (m+m-n):
>
> numericEnumFromThen n m = n `seq` m `seq` (n : numericEnumFromThen m (m+m-n))
>
> Why not define numericEnumFromThen as:
>
> numericEnumFromThen n m = let d = m - n in d `seq` go d n
> where go delta x = x `seq` (x : go delta (x + delta))
>
> (or with BangPatterns)
>
> numericEnumFromThen n m = go (m - n) n
> where go !delta !x = x : go delta (x + delta)
>
> Seems like we'd save a lot of subtractions by using the worker function.


enumFromThenTo for Doubles

2016-08-09 Thread Andrew Farmer
Noticed this today:

ghci> let xs = [0.0,0.1 .. 86400.0] in maximum xs
86400.005062

enumFromThenTo is implemented by numericEnumFromThenTo:

https://github.com/ghc/ghc/blob/a90085bd45239fffd65c01c24752a9bbcef346f1/libraries/base/GHC/Real.hs#L227

Which probably accumulates error in numericEnumFromThen with the (m+m-n):

numericEnumFromThen n m = n `seq` m `seq` (n : numericEnumFromThen m (m+m-n))

Why not define numericEnumFromThen as:

numericEnumFromThen n m = let d = m - n in d `seq` go d n
where go delta x = x `seq` (x : go delta (x + delta))

(or with BangPatterns)

numericEnumFromThen n m = go (m - n) n
where go !delta !x = x : go delta (x + delta)

Seems like we'd save a lot of subtractions by using the worker function.
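
(For anyone who wants to reproduce the numbers outside GHC itself, here is a
standalone sketch of both recurrences; the names are invented and this is not
the actual library code, so treat the output as illustrative only.)

-- currentFromThen mirrors today's (m+m-n) recurrence;
-- proposedFromThen is the fixed-delta worker suggested above.
currentFromThen, proposedFromThen :: Double -> Double -> [Double]
currentFromThen n m = n : currentFromThen m (m + m - n)
proposedFromThen n m = let d = m - n in iterate (+ d) n

main :: IO ()
main = do
  let bound = 86400.0 + 0.1   -- fuzzy upper bound so the final step is kept
  print (maximum (takeWhile (<= bound) (currentFromThen 0.0 0.1)))
  print (maximum (takeWhile (<= bound) (proposedFromThen 0.0 0.1)))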


Re: Why upper bound version numbers?

2016-06-07 Thread Andrew Farmer
Aforementioned versioning policy:
https://wiki.haskell.org/Package_versioning_policy

On Mon, Jun 6, 2016 at 5:58 PM, Dominick Samperi  wrote:
> I guess what you are saying is that this policy will prevent packages
> from installing with new versions of ghc until the maintainer has had
> a chance to test the package with the new version, and has updated the
> upper version limit. Thus, inserting those upper version limits is a
> kind of flag that indicates that the package has been "certified" for
> use with versions of base less than or equal to the upper limit.
>
> On Mon, Jun 6, 2016 at 8:22 PM, Brandon Allbery  wrote:
>> On Mon, Jun 6, 2016 at 8:19 PM, Dominick Samperi 
>> wrote:
>>>
>>> The odd thing about this is that to upper bound a package that you did
>>> not write (like base) you would have to know that incompatible changes
>>> were coming in subsequent revisions, or that features of the API that
>>> you rely on will be changed.
>>
>>
>> There is a versioning policy covering this. It has been found to be
>> necessary because otherwise people who try to build packages find themselves
>> with broken messes because of the assumption that any future version of a
>> package is guaranteed to be compatible.
>>
>> --
>> brandon s allbery kf8nh   sine nomine associates
>> allber...@gmail.com  ballb...@sinenomine.net
>> unix, openafs, kerberos, infrastructure, xmonad    http://sinenomine.net


Re: lookupRdrNameInModuleForPlugins with constructors

2016-03-22 Thread Andrew Farmer
Er, dictionary... sorry, mkDataOccFS
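
(A minimal sketch of what I mean, using 7.10-era module names and untested, so
treat it as a starting point rather than working code:)

import FastString (fsLit)
import OccName    (mkDataOccFS)
import RdrName    (RdrName, mkRdrUnqual)

-- Build the RdrName in the data-constructor namespace, then look it up
-- with lookupRdrNameInModuleForPlugins exactly as in the snippet below.
hasRepConRdr :: RdrName
hasRepConRdr = mkRdrUnqual (mkDataOccFS (fsLit "C:HasRep"))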

On Tue, Mar 22, 2016 at 5:24 PM, Andrew Farmer <xicheko...@gmail.com> wrote:
> mkVarUnqual calls mkVarOccFS, which constructs an OccName in the
> varName namespace. You need to construct your RdrName via mkTyVarOcc,
> which picks the Type/Class namespace.
>
> On Tue, Mar 22, 2016 at 5:09 PM, Conal Elliott <co...@conal.net> wrote:
>> I'm trying to construct a dictionary in a GHC plugin. I'm stuck on finding
>> the constructor for the dictionary. When I use `-ddump-simpl` on the module
>> that defines the class, I see "Circat.Rep.C:HasRep". To try finding that
>> constructor, I say
>>
>>> lookupRdrNameInModuleForPlugins hsc_env
>>>   (mkModuleName "Circat.Rep") (mkVarUnqual "C:HasRep")
>>
>> However, I keep getting `Nothing` as a result. (Same without the "C:".) I've
>> also had this same difficulty when looking up constructors for algebraic
>> data types and when looking up TyCons. For regular value Ids, lookup
>> succeeds.
>>
>> What am I missing?
>>
>> Thanks, - Conal
>>


Re: lookupRdrNameInModuleForPlugins with constructors

2016-03-22 Thread Andrew Farmer
mkVarUnqual calls mkVarOccFS, which constructs an OccName in the
varName namespace. You need to construct your RdrName via mkTyVarOcc,
which picks the Type/Class namespace.

On Tue, Mar 22, 2016 at 5:09 PM, Conal Elliott  wrote:
> I'm trying to construct a dictionary in a GHC plugin. I'm stuck on finding
> the constructor for the dictionary. When I use `-ddump-simpl` on the module
> that defines the class, I see "Circat.Rep.C:HasRep". To try finding that
> constructor, I say
>
>> lookupRdrNameInModuleForPlugins hsc_env
>>   (mkModuleName "Circat.Rep") (mkVarUnqual "C:HasRep")
>
> However, I keep getting `Nothing` as a result. (Same without the "C:".) I've
> also had this same difficulty when looking up constructors for algebraic
> data types and when looking up TyCons. For regular value Ids, lookup
> succeeds.
>
> What am I missing?
>
> Thanks, - Conal
>


Re: [ANNOUNCE] Glasgow Haskell Compiler 8.0.1, release candidate 1

2016-01-13 Thread Andrew Farmer
I'm guessing this is the same problem with gmp on OS X, but I got it
during the ./configure step, so thought I'd pass along the error
message:

mba:ghc-8.0.0.20160111 xich$ ./configure --prefix=/Users/xich/projects/ghc8
checking for path to top of build tree... dyld: Library not loaded:
/usr/local/opt/gmp/lib/libgmp.10.dylib
  Referenced from:
/Users/xich/projects/ghc-8.0.0.20160111/utils/ghc-pwd/dist-install/build/tmp/ghc-pwd
  Reason: image not found
configure: error: cannot determine current directory

You made reference to the wiki build instructions... do you have a
link to the ones you mean? I can try following them.

On Wed, Jan 13, 2016 at 2:52 PM, Ben Gamari  wrote:
> George Colpitts  writes:
>
>> installs fine on mac but cabal install vector fails on primitive, looks to
>> me like gmp library is not provided
>>
>> cabal install vector
>> Resolving dependencies...
>> Configuring primitive-0.6.1.0...
>> Failed to install primitive-0.6.1.0
>> Build log ( /Users/gcolpitts/.cabal/logs/primitive-0.6.1.0.log ):
>> cabal: Error: some packages failed to install:
>> primitive-0.6.1.0 failed during the configure step. The exception was:
>> user error ('/usr/local/bin/ghc' exited with an error:
>> *ld: library not found for -lgmp*
>> collect2: error: ld returned 1 exit status
>> `gcc' failed in phase `Linker'. (Exit code: 1)
>> )
>> vector-0.11.0.0 depends on primitive-0.6.1.0 which failed to install.
>>
> The issue here appears to be that the build was compiled against the
> host's libgmp on account of the Wiki build instructions [1] being out of
> date. I've fixed this particular inaccuracy, but it would be great if
> someone familiar with Darwin could take a close look at this page: do a
> build of their own, work through the build instructions, and ensure that
> the build instructions are still accurate, fixing any issues.
>
> George or Carter, perhaps one of you could pick this up?
>
> Cheers,
>
> - Ben
>
>


Re: latest symlink

2016-01-05 Thread Andrew Farmer
Possibly related, when I use hoogle to look up GHC things ("+ghc
Unique", for instance) and try to click through to the haddocks, it
always 404s.

If I change the link from:

https://downloads.haskell.org/~ghc/latest/docs/html/libraries/ghc/Unique.html#t:Unique

to:

https://downloads.haskell.org/~ghc/latest/docs/html/libraries/ghc-7.10.3/Unique.html#t:Unique

Then it works.

So, is there a way to point:

https://downloads.haskell.org/~ghc/latest/docs/html/libraries/ghc

to:

https://downloads.haskell.org/~ghc/latest/docs/html/libraries/ghc-7.10.3

?

On Mon, Jan 4, 2016 at 5:32 PM, Ben Gamari  wrote:
> Joachim Breitner  writes:
>
>> Hi,
>>
>> https://downloads.haskell.org/~ghc/latest/ still points to
>> https://downloads.haskell.org/~ghc/7.10.2/ and not
>> https://downloads.haskell.org/~ghc/7.10.3/
>>
> So you are right. It should be fixed now.
>
> Cheers,
>
> - Ben
>
>


Re: Inlining of methods (dictionary accessors) in GHC 7.10?

2016-01-05 Thread Andrew Farmer
I *think* we found our answer here:

https://github.com/ghc/ghc/blob/2db18b8135335da2da9918b722699df684097be9/compiler/typecheck/TcInstDcls.hs#L158
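
(For anyone else hitting this, the mental model, sketched in plain Haskell
rather than GHC's actual Core, is that a class gives rise to a record of
methods and each method name is a selector into that record; the names below
are invented and this is only an illustration.)

data NumDict a = NumDict { plusM :: a -> a -> a, timesM :: a -> a -> a }

numDictInt :: NumDict Int          -- plays the role of an instance's dictionary
numDictInt = NumDict (+) (*)

plusInt :: Int -> Int -> Int       -- a "method" is the selector applied to it
plusInt = plusM numDictInt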

On Tue, Jan 5, 2016 at 9:39 PM, Conal Elliott  wrote:
> Sorry for the editing error. I meant "Did something about change with ...".
>
> On Tue, Jan 5, 2016 at 9:38 PM, Conal Elliott  wrote:
>>
>> Did something about change with method inlining between GHC 7.8.2 and
>> 7.10.3? I don't mean methods attached to instances, but rather the method
>> name itself, which I understand is defined as simple field accessors into a
>> dictionary. I do inlining indirectly via HERMIT, and the method names are no
>> longer successfully inlining to the accessors. The dictionaries themselves
>> inline fine, as do field accessors for algebraic types with named fields.
>>
>> -- Conal
>
>
>


Re: latest symlink

2016-01-05 Thread Andrew Farmer
Yeah, I had to ask for this once before and it broke again with a new
GHC version, so I figured it wasn't the right solution. I take it most
people don't use hoogle on the GHC codebase... but I find it useful to
search by type signature.

Thanks for the temporary fix though!

On Tue, Jan 5, 2016 at 2:48 PM, Ben Gamari <b...@well-typed.com> wrote:
> Andrew Farmer <xicheko...@gmail.com> writes:
>
>> Possibly related, when I use hoogle to look up GHC things ("+ghc
>> Unique", for instance) and try to click through to the haddocks, it
>> always 404s.
>>
> I've gone ahead and created a symlink for now. That being said, this
> seems like a very hacky solution.
>
> Moreover, I doubt that the database that the haskell.org Hoogle instance
> is using matches the current state of the code. We should probably try
> to update it.
>
> Cheers,
>
> - Ben
>


Re: tyVarsOfTypeAcc

2015-12-03 Thread Andrew Farmer
Bartosz left a note in the diff about it being faster this way:

https://github.com/niteria/deterministic-fvs/blob/master/results.txt#L83-L89

But yeah, I would have also thought it better eta-reduced.
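
(A toy illustration of the two styles under discussion, with invented names and
nothing to do with GHC's actual FV code:)

-- Eta-reduced: the accumulator argument is left implicit.
collectAcc :: Int -> ([Int] -> [Int])
collectAcc x = (x :)

-- Eta-expanded: the accumulator is bound and passed explicitly, which changes
-- the definition's arity and can change how GHC compiles and inlines it.
collectAcc' :: Int -> [Int] -> [Int]
collectAcc' x acc = x : acc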

On Thu, Dec 3, 2015 at 10:43 AM, Richard Eisenberg  wrote:
> Hi devs,
>
> I'm (once again) merging master into my type=kind branch. I see that we now 
> have tyVarsOfTypeAcc :: Type -> FV, distinct from tyVarsOfType :: Type -> 
> TyVarSet. I trust that this new version is more performant. However, I have a 
> question: in the implementation of these functions, the three extra FV 
> parameters (fv_cand in_scope acc) are bound and passed each time.
>
> Why?
>
> I've always understood that eta-reducing in function definitions is better 
> than expanding. Or is this just a style choice?
>
> Thanks,
> Richard


Re: Better calling conventions for strict functions (bang patterns)?

2015-10-26 Thread Andrew Farmer
Simon,

I really enjoyed reading this paper... I was wondering if you could comment
on the implementation of Strict Core? Was it ever implemented in GHC (even
as a proof-of-concept)? If not, was it just due to a lack of time or some
fundamental limitation or problem discovered after the paper? If it was
implemented, was any benefit actually measured? Can you speculate on
whether some of the more recent changes/additions to Core (such as
coercions and roles) might fit into this? (I don't see any obvious reason
they couldn't, but that is me.)

Thanks!
Andrew
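
(Not from the paper -- just to make the caller-versus-callee point in the
quoted discussion concrete at the source level: one way to approximate "caller
enforces WHNF" today is an INLINE wrapper that forces the argument, so the
worker may assume its argument is already evaluated. Names are invented; this
is only an illustration, not the proposed implementation.)

{-# LANGUAGE BangPatterns #-}

foo :: Maybe Int -> Int
foo !x = fooWorker x          -- wrapper: forces x at (inlined) call sites
{-# INLINE foo #-}

fooWorker :: Maybe Int -> Int -- worker: may assume x is already in WHNF
fooWorker x = case x of
  Just y  -> y
  Nothing -> 0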

On Fri, Oct 23, 2015 at 7:11 AM, Simon Peyton Jones 
wrote:

> It’s absolutely the case that bang patterns etc tell the caller what to
> do, but the function CANNOT ASSUME that its argument is evaluated.  Reason:
> higher order functions.
>
>
>
> I think that the way to allow functions that can assume their arg is
> evaluated is through types: see Types are calling conventions.
> But it’d be a fairly big deal to implement.
>
>
>
> Simon
>
>
>
>
>
> *From:* ghc-devs [mailto:ghc-devs-boun...@haskell.org] *On Behalf Of *Ryan
> Newton
> *Sent:* 23 October 2015 14:54
> *To:* ghc-devs@haskell.org; Ömer Sinan Ağacan; Ryan Scott; Chao-Hong
> Chen; Johan Tibell
> *Subject:* Better calling conventions for strict functions (bang
> patterns)?
>
>
>
> Hi all,
>
>
>
> With module-level Strict and StrictData pragmas coming soon, one obvious
> question is what kind of code quality GHC can achieve for strict
> programs.
>
>
>
> When it came up in discussion in our research group we realized we didn't
> actually know whether the bang patterns, `f !x`, on function arguments were
> enforced by caller or callee.
>
>
>
> Here's a Gist that shows the compilation of a trivial function:
>
> foo :: Maybe Int -> Int
>
> foo !x =
>
>   case x of
>
>      Just y -> y
>
>
>
>    https://gist.github.com/rrnewton/1ac722189c65f26fe9ac
>
>
>
> If that function is compiled to *assume* its input is in WHNF, it should
> be just as efficient as the isomorphic MLton/OCaml code, right?  It only
> needs to branch on the tag, do a field dereference, and return.
>
>
>
> But as you can see from the STG and CMM generated, foo *does indeed*
> enter the thunk, adding an extra indirect jump.  Here's the body:
>
>   c3aY:
>
>   if ((Sp + -8) < SpLim) goto c3aZ; else goto c3b0;
>
>   c3aZ:
>
>   // nop
>
>   R1 = PicBaseReg + foo_closure;
>
>   call (I64[BaseReg - 8])(R2, R1) args: 8, res: 0, upd: 8;
>
>   c3b0:
>
>   I64[Sp - 8] = PicBaseReg + block_c3aO_info;
>
>   R1 = R2;
>
>   Sp = Sp - 8;
>
>   if (R1 & 7 != 0) goto c3aO; else goto c3aP;
>
>   c3aP:
>
>   call (I64[R1])(R1) returns to c3aO, args: 8, res: 8, upd: 8;
>
>   c3aO:
>
>   if (R1 & 7 >= 2) goto c3aW; else goto c3aX;
>
>   c3aW:
>
>   R1 = P64[R1 + 6] & (-8);
>
>   Sp = Sp + 8;
>
>   call (I64[R1])(R1) args: 8, res: 0, upd: 8;
>
>   c3aX:
>
>   R1 = PicBaseReg + lvl_r39S_closure;
>
>   Sp = Sp + 8;
>
>   call (I64[R1])(R1) args: 8, res: 0, upd: 8;
>
>
>
>
>
> The call inside c3aP is entering "x" as a thunk, which also incurs all of
> the stack limit check code.  I believe that IF the input could be assumed
> to be in WHNF, everything above the label "c3aO" could be omitted.
>
>
>
> So... if GHC is going to be a fabulous pure *and* imperative language,
> and a fabulous lazy *and* strict compiler/runtime.. is there some work we
> can do here to improve this situation? Would the following make sense:
>
>- Put together a benchmark suite of all-strict programs with
>Strict/StrictData (compare a few benchmark's generated code to MLton, if
>time allows)
>- Modify GHC to change calling conventions for bang patterns -- caller
>enforces WHNF rather than callee.  Existing strictness/demand/cardinality
>analysis would stay the same.
>
> Unless there's something I'm really missing here, the result should be
> that you can have a whole chain of strict function calls, each of which
> knows its arguments and the arguments it passes to its callees are all in
> WHNF, without ever generating thunk-entry sequences.
>
>
>
> Thanks for your time,
>
>   -Ryan
>
>
>

Re: HEADS UP: Need 7.10.3?

2015-09-16 Thread Andrew Farmer
As you mentioned, the two show stoppers for HERMIT are #10528
(specifically SPJ's commit in comment:15 - see [1]) and #10829 (see
D1246). The first disables inlining/rule application in the LHS of
rules, the second does the same in the RHS. nofib results for the
latter are on the ticket.
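
(For anyone skimming: "LHS" and "RHS" above refer to the two sides of a rewrite
RULE, as in the classic map-fusion example below; the tickets are about GHC
inlining and applying rules inside those two sides.)

{-# RULES
"map/map" forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}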

I've set both to 7.10.3 milestone and high priority... thanks for merging them!

[1] bc4b64ca5b99bff6b3d5051b57cb2bc52bd4c841

On Mon, Sep 14, 2015 at 6:53 AM, Austin Seipp  wrote:
> Hi *,
>
> (This is an email primarily aimed at users reading this list and
> developers who have any interest).
>
> As some of you may know, there's currently a 7.10.3 milestone and
> status page on our wiki:
>
> https://ghc.haskell.org/trac/ghc/wiki/Status/GHC-7.10.3
>
> The basic summary is best captured on the above page:
>
> "We have not yet decided when, or even whether, to release GHC 7.10.3.
> We will do so if (but only if!) we have documented cases of
> "show-stoppers" in 7.10.2. Namely, cases from users where
>
>   - You are unable to use 7.10.2 because of some bug
>   - There is no reasonable workaround, so you are truly stuck
>   - We know how to fix it
>   - The fix is not too disruptive; i.e. does not risk introducing a
> raft of new bugs"
>
> That is, we're currently not fully sold on the need for a release.
> However, the milestone and issue page serve as a useful guide, and
> also make it easier to keep track of smaller, point-release worthy
> issues.
>
> So in the wake of the 8.0 roadmap I just sent: If you *need* 7.10.3
> because the 7.10.x series has a major regression or problem you can't
> work around, let us know!
>
>   - Find or file a bug in Trac
>   - Make sure it's highest priority
>   - Assign it to the 7.10.3 milestone
>   - Follow up on this email if possible, or edit it on the status page
> text above - it would be nice to get some public feedback in one place
> about what everyone needs.
>
> Currently we have two bugs on the listed page in the 'show stopper
> category', possibly the same bug, which is a deal-breaker for HERMIT I
> believe. Knowing of anything else would be very useful.
>
> Thanks all!
>
> --
> Regards,
>
> Austin Seipp, Haskell Consultant
> Well-Typed LLP, http://www.well-typed.com/


Re: [Haskell] ETA on 7.10.3?

2015-09-02 Thread Andrew Farmer
Sorry, I dropped the ball on creating a ticket. I just did so:

https://ghc.haskell.org/trac/ghc/ticket/10829

(As an aside, the original ticket, #10528, had a milestone set as
7.10.3, so I just assumed a 7.10.3 was planned and coming soon.)

On Wed, Sep 2, 2015 at 7:43 AM, Simon Peyton Jones
 wrote:
> Ah, well  https://github.com/ku-fpg/hermit/issues/144#issuecomment-128762767
>
> links in turn to https://github.com/ku-fpg/hermit/issues/141, which is a
> long thread I can’t follow.
>
>
>
> Ryan, Andy: if 7.10.2 is unusable for you, for some reason, please make a
> ticket to explain why, and ask for 7.10.3.
>
>
> Simon
>
>
>
> From: Haskell [mailto:haskell-boun...@haskell.org] On Behalf Of David Banas
> Sent: 02 September 2015 13:19
> To: Ben Gamari
> Cc: hask...@haskell.org
> Subject: Re: [Haskell] ETA on 7.10.3?
>
>
>
> Hi Ben,
>
>
>
> Thanks for your reply.
>
>
>
> My problem is the project I’m currently working on is dependent upon HERMIT,
> which doesn’t play well with 7.10.2, as per:
>
>
>
> https://github.com/ku-fpg/hermit/issues/144#issuecomment-128762767
>
>
>
> (The nature of that comment caused me to think that 7.10.3 was in play.)
>
>
>
> Thanks,
>
> -db
>
>
>
> On Sep 2, 2015, at 12:05 AM, Ben Gamari  wrote:
>
>
>
> David Banas  writes:
>
>
> Hi,
>
> Does anyone have an ETA for ghc v7.10.3?
> (I'm trying to decide between waiting and backing up to 7.8.2, for a
> particular project.)
>
Currently there are no plans to do a 7.10.3 release. 7.10.2 does have a
few issues, but none of them are critical regressions and none of them
appear critical enough to burn maintenance time on.
>
> Of course, we are willing to reevaluate in the event that new issues
> arise. What problems with 7.10.2 are you struggling with?
>
> Cheers,
>
> - Ben
>
>
>
>


Re: question about GHC API on GHC plugin

2015-08-24 Thread Andrew Farmer
I'm not positive, but I believe each dictionary has a field for its
superclass dictionary. So if you have a dictionary for `Floating
Float`, one of the fields will be the `Num Float` dictionary.

How to get the projector function for the field... I'm not sure. But
perhaps you can find it by type?
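
(A rough picture in plain Haskell, assuming the standard Num => Fractional =>
Floating hierarchy; this is an illustration of the layout only, not GHC's
actual Core, and the names are invented.)

data NumD a        = NumD        { plusD :: a -> a -> a }
data FractionalD a = FractionalD { fracNumD :: NumD a, divD :: a -> a -> a }
data FloatingD a   = FloatingD   { floatFracD :: FractionalD a, logD :: a -> a }

-- Getting Num out of Floating is then composed field selection, which is
-- roughly what the generated superclass selectors do for you:
numOfFloating :: FloatingD a -> NumD a
numOfFloating = fracNumD . floatFracD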

On Mon, Aug 24, 2015 at 2:42 PM, Mike Izbicki m...@izbicki.me wrote:
 Thanks!  Now one more question :)

 The code Andrew Farmer showed me for getting dictionaries works great
 when I have a concrete type (e.g. Float) I want a dictionary for.  But
 now I'm working on polymorphic code and running into a problem.

 Let's say I'm running the plugin on a function with signature `Floating
 a => a -> a`, then the plugin has access to the `Floating` dictionary
 for the type.  But if I want to add two numbers together, I need the
 `Num` dictionary.  I know I should have access to `Num` since it's a
 superclass of `Floating`.  How can I get access to these superclass
 dictionaries?

 On Sat, Aug 22, 2015 at 7:35 AM, Ömer Sinan Ağacan omeraga...@gmail.com 
 wrote:
 I have a new question: I'm working on supporting literals now.  I'm having
 trouble creating something that looks like `(App (Var F#) (Lit 1.0))` 
 because
 I don't know how to create a variable that corresponds to the `F#`
 constructor.  The mkWiredInName function looks promising, but overly
 complicated.  Is this the correct function?  If so, what do I pass in for 
 the
 Module, Unique, TyThing, and BuiltInSyntax parameters?

 mkConApp intDataCon [mkIntLit dynFlags PUT_YOUR_INTEGER HERE]
 mkConApp floatDataCon [mkFloatLit dynFlags PUT_YOUR_FLOAT_HERE]

 Similarly for other literals...


Re: question about GHC API on GHC plugin

2015-08-20 Thread Andrew Farmer
The `buildDictionary` function takes a Var with a dictionary type, and
builds the expression which implements that dictionary.

For instance, you might create a new Var:

x :: Num Float

and pass that to buildDictionary. It will return:

(x, [NonRec x $fNumFloat])

which you could blindly turn into:

let x = $fNumFloat
in x

or you could do what buildDictionaryT does (a bit further down in the same
module): spot that case and just return $fNumFloat directly. (The
list can have more than one element in the case that dictionaries are
built in terms of other dictionaries.)

Thus, you've built a dictionary expression of type Num Float.

As I understand it, you want to pass something like 'log' and get back the
dictionary argument. You'll need to choose a type (like Float), but
once that is done, it should be easy to use buildDictionary to build
the dictionary arguments... just take apart the type of 'log @ Float',
make a new Var with the argument type, build a dictionary expression,
and apply it.
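
(Concretely, the "blindly turn it into a let" step is essentially the
following, sketched with 7.10-era GHC module names and untested:)

import CoreSyn (CoreBind, CoreExpr, Expr (Var))
import MkCore  (mkCoreLets)
import Var     (Id)

-- Wrap the binds returned by buildDictionary around the dictionary Id.
dictExpr :: (Id, [CoreBind]) -> CoreExpr
dictExpr (i, bnds) = mkCoreLets bnds (Var i)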

On Thu, Aug 20, 2015 at 5:05 PM, Mike Izbicki m...@izbicki.me wrote:
 I'm pretty sure the `buildDictionary` function doesn't do what I need.
 AFAICT, you pass it a `Var` which contains a dictionary, and it tells
 you what is in that dictionary.  What I need is a function with type
 `Var - Var` where the first `Var` contains a function, and the output
 `Var` is the dictionary.

 For example, given the expression:

 log (a1+a2)

 In core, this might look like:

 log @ Float $fFloatingFloat (+ @ Float $fNumFloat a1 a2)

 I want to mechanically construct the core code above.  When doing so,
 each function within a type class has an extra argument, which is the
 dictionary for that type class.  `log` no longer takes one parameter;
 in core, it takes two.  I'm having trouble figuring out how to get the
 appropriate dictionary to pass as the dictionary parameter to these
 functions.

 On Mon, Aug 17, 2015 at 4:21 PM, Andrew Farmer afar...@ittc.ku.edu wrote:
 HERMIT has some code for building dictionaries for a given predicate
 type (by invoking the typechecker functions that do this):

 https://github.com/ku-fpg/hermit/blob/master/src/HERMIT/Dictionary/GHC.hs#L223

 The functions to run TcM computations inside CoreM are here:

 https://github.com/ku-fpg/hermit/blob/master/src/HERMIT/Monad.hs#L242
 and
 https://github.com/ku-fpg/hermit/blob/master/src/HERMIT/GHC/Typechecker.hs#L47

 Perhaps that will help get you started?

 I would like to push these interfaces back into the GHC API at some
 point, but just haven't done it yet.

 HTH
 Andrew

 On Mon, Aug 17, 2015 at 4:12 PM, Mike Izbicki m...@izbicki.me wrote:
 I'm not sure how either of those two functions can help me.  The
 problem is that given an operator (e.g. `+`), I don't know the name of
 the dictionary that needs to be passed in as the first argument to the
 operator.  I could probably hard code these names, but then the plugin
 wouldn't be able to work with alternative preludes.

 On Fri, Aug 7, 2015 at 11:20 PM, Edward Z. Yang ezy...@mit.edu wrote:
 Hello Mike,

 Give importDecl from LoadIface a try, or maybe tcLookupGlobal if
 you're in TcM.

 Edward

 Excerpts from Mike Izbicki's message of 2015-08-07 15:40:30 -0700:
 I'm trying to write a GHC plugin.  The purpose of the plugin is to
 provide Haskell bindings to Herbie. Herbie
 (https://github.com/uwplse/herbie) is a program that takes a
 mathematical statement as input, and gives you a numerically stable
 formula to compute it as output.  The plugin is supposed to automate
 this process for Haskell programs.

 I can convert the core expressions into a format for Herbie just fine.
 Where I'm having trouble is converting the output from Herbie back
 into core.  Given a string that represents a numeric operator (e.g.
 log or +), I can get that converted into a Name that matches the
 Name of the version of that operator in scope at the location.  But in
 order to create an Expr, I need to convert the Name into a Var.  All
 the functions that I can find for this (e.g. mkGlobalVar) also require
 the type of the variable.  But I can't find a way to figure out the
 Type given a Name.  How can I do this?


Re: question about GHC API on GHC plugin

2015-08-17 Thread Andrew Farmer
HERMIT has some code for building dictionaries for a given predicate
type (by invoking the typechecker functions that do this):

https://github.com/ku-fpg/hermit/blob/master/src/HERMIT/Dictionary/GHC.hs#L223

The functions to run TcM computations inside CoreM are here:

https://github.com/ku-fpg/hermit/blob/master/src/HERMIT/Monad.hs#L242
and
https://github.com/ku-fpg/hermit/blob/master/src/HERMIT/GHC/Typechecker.hs#L47

Perhaps that will help get you started?

I would like to push these interfaces back into the GHC API at some
point, but just haven't done it yet.

HTH
Andrew

On Mon, Aug 17, 2015 at 4:12 PM, Mike Izbicki m...@izbicki.me wrote:
 I'm not sure how either of those two functions can help me.  The
 problem is that given an operator (e.g. `+`), I don't know the name of
 the dictionary that needs to be passed in as the first argument to the
 operator.  I could probably hard code these names, but then the plugin
 wouldn't be able to work with alternative preludes.

 On Fri, Aug 7, 2015 at 11:20 PM, Edward Z. Yang ezy...@mit.edu wrote:
 Hello Mike,

 Give importDecl from LoadIface a try, or maybe tcLookupGlobal if
 you're in TcM.

 Edward

 Excerpts from Mike Izbicki's message of 2015-08-07 15:40:30 -0700:
 I'm trying to write a GHC plugin.  The purpose of the plugin is to
 provide Haskell bindings to Herbie. Herbie
 (https://github.com/uwplse/herbie) is a program that takes a
 mathematical statement as input, and gives you a numerically stable
 formula to compute it as output.  The plugin is supposed to automate
 this process for Haskell programs.

 I can convert the core expressions into a format for Herbie just fine.
 Where I'm having trouble is converting the output from Herbie back
 into core.  Given a string that represents a numeric operator (e.g.
 log or +), I can get that converted into a Name that matches the
 Name of the version of that operator in scope at the location.  But in
 order to create an Expr, I need to convert the Name into a Var.  All
 the functions that I can find for this (e.g. mkGlobalVar) also require
 the type of the variable.  But I can't find a way to figure out the
 Type given a Name.  How can I do this?


OS X bindist

2015-04-22 Thread Andrew Farmer
Forgive me if this has been answered somewhere, but is there an
officially sanctioned bindist of 7.10.1 for OS X yet? The page I
usually visit [1] doesn't list one yet.

Thanks!
Andrew

[1] https://www.haskell.org/ghc/download_ghc_7_10_1


Re: [haskell-infrastructure] wither the Platform

2015-03-23 Thread Andrew Farmer
I think the fact that we now have these minimal installers floating
around is evidence that there is demand for that.

I personally just download the latest bindist from the GHC site and
bootstrap cabal myself. Partly this is because my work requires me to
have the latest GHC, so maybe I'm not in the HP's target demographic.
That said, I would love if there was a platform that just did that (+
whatever is needed to get that to work). Maybe at the end of the
minimal install, give me a choice between stackage and hackage and set
the remote-repo in my cabal file appropriately.

I was excited by all Mark's (and other's) recent work on the platform,
because it sounded like the new build system would allow it to track
GHC much more closely. To me, the value of the HP (and why I still
recommend it to people) was always that it is a quick way to get ghc +
cabal on your system. Have a curated set of recommended packages was
neither here nor there (until they get way out of date, as Michael
pointed out with aeson 0.6.2.1). Finding good libraries seems like a
 problem that google + hackage 2 + stackage solves well nowadays.

On Mon, Mar 23, 2015 at 12:19 PM, Gershom B gersh...@gmail.com wrote:
 On Mon, Mar 23, 2015 at 11:20 AM, Anthony Cowley acow...@gmail.com wrote:

 I don't understand this attitude. You say that neither new users nor package
 authors agree with your stance on the HP, so you want to force their hands.
 Presumably package authors know what they're doing for themselves, and the
 majority of evidence we have is that new users who stick with the language
 do not like the way the HP works
 https://reddit.com/r/haskell/comments/2zts44/wither_the_platform/.

  No, this is not what I said. I was explaining that it is rather
 difficult, even if we wanted to force their hands, to imagine that we
 could do so. I will say that I do not share your belief that package
 authors know what they are doing, in general. If they did, I would
 imagine that nearly all packages would ensure they would work with the
 current platform. But getting programmers to all agree on something
 like that is herding cats, and all it takes is a few people saying
 feh on the platform but nonetheless producing otherwise good
 packages that come into widespread use to _in general_ mean that many
 packages which directly or indirectly want to use those dependencies
 must also now split off from support for the platform.

 So I agree that people have voted with their feet, and we need to
 catch up to that.

 In fact, it seems to me from this discussion that there are only _two_
 things we need to do to make the platform the recommended install path
 for all platforms again:

 1) Incorporate the improvements to windows builds that have been
 pioneered by the MinGHC project (in particular so that
 platform-installed windows ghc can build the network package properly,
 and perhaps even the GL stuff).

 2) Address the problem that in a sandbox you will get a different
 install plan depending on your global package db. I would suggest this
 be done by setting a default preference of the sandbox to ignore
 global packages, and allow that to be overridden with an easily
 configurable flag or setting or the like. That way, new users can pay
 the longer compile price for a guaranteed-safer experience, and more
 experienced users can choose to use / build up a broader library of
 packages they can share across sandboxes.

 (Certainly some nix-like tricks in cabal could also help address the
 repeated builds problem, but a flag can be implemented much more
 quickly as a short-term measure).

 All that said, I think it would _still_ be better if more package
 authors paid attention to backwards-compatibility across releases, and
 it would _still_ be better for new users if they didn't go off and
 bash their heads against the newest and least stable packages around.
 But that is indeed a different problem.

 Cheers,
 Gershom


GHC Haddock URL

2015-02-12 Thread Andrew Farmer
Can someone with The Power please symlink (or otherwise redirect) this:

https://downloads.haskell.org/~ghc/latest/docs/html/libraries/ghc

to

https://downloads.haskell.org/~ghc/latest/docs/html/libraries/ghc-7.8.4

... so I can Hoogle with greater ease?

Thanks much!
Andrew


Re: Wiki: special namespace for proposals?

2014-10-15 Thread Andrew Farmer
Instead of encoding the status in the URL, since we don't want URLs to
change with the status of the proposal/feature changes, it sounds like
we really just want something better than TitleIndex for browsing the
wiki. (I've never seen a Trac wiki where TitleIndex is that useful
anyway, other than to ctrl-f stuff.)

I'm not really familiar with what Trac Wiki is capable of. Is it
possible to add tags/categories to a page and then make an
auto-generated list of links to all pages with a given tag/category?
Then local edits to a given design (changing the category from
'proposed' to 'implemented') would automatically move it around as
appropriate.

I think it would be helpful to have a single place from which we could
browse current/past proposals. If for no other reason than to get an
idea how to write one myself.

On Wed, Oct 15, 2014 at 4:14 PM, Joachim Breitner
m...@joachim-breitner.de wrote:
 Hi,


 Am Mittwoch, den 15.10.2014, 11:06 +0200 schrieb Jan Stolarek:
 I'm all for improving organization of the wiki but I'm not sure about this 
 idea. What happens when
 a proposal gets implemented? You can't just move the page to a new address. 
 You can create a new
  wiki page describing the final design that was implemented and replace the 
 content of the proposal
  page with a redirection. But that means more mess in the wiki namespace.

 I think proposal and design pages are (or at least, could be) different
 things. In a Proposal, there are alternatives, there are little details,
 there are notes about dead end, possibly benchmarks or such justifying a
 choice.

 Once something is implemented, most of that is not immediately
 interesting to someone trying to understand the final design (i.e. to
 fix a bug). So a good design page would have a structure anyway. And we
 already have a namespace for that: Commentary!

 So when a Proposal gets implemented, this should be clearly noted at the
 top of the Proposal page, linking to the relevant Comentary page (or
 paper, if there is one, or Note in the code, if the final design is so
 simple that it fits that format). The discussion about the Proposal
 would still be there for those who need to do some historical digging,
 i.e. when someone suggest a new implementation and we need to check if
 that variant was considered in the original implementation.

 Greetings,
  Joachim

 --
 Joachim Breitner
   e-Mail: m...@joachim-breitner.de
   Homepage: http://www.joachim-breitner.de
   Jabber-ID: nome...@joachim-breitner.de




Re: D202: Injective type families

2014-09-18 Thread Andrew Farmer
I believe you do this with: arc patch D123456

The arcanist docs are handy:
https://secure.phabricator.com/book/phabricator/article/arcanist/

On Thu, Sep 18, 2014 at 9:38 AM, Simon Peyton Jones
simo...@microsoft.com wrote:
 Dear GHC devs

 I'm sure I should know this, but if I want to build a Phab patch, to 
 reproduce some issue (example below). How would I do that?

 If Phabs were branches in the GHC repo I could say
 Git checkout phab/D202

 That would be cool.  I know how to do that.

 But I don't think they are.  So what do I do?

 Simon

 |  -Original Message-
 |  From: nore...@phabricator.haskell.org
 |  [mailto:nore...@phabricator.haskell.org]
 |  Sent: 18 September 2014 09:35
 |  To: Simon Peyton Jones
 |  Subject: [Differential] [Commented On] D202: Injective type families
 |
 |  jstolarek added a comment.
 |
 |  I made two important adjustments:
 |
 |  - first of all I removed the `result` from the parser and lexer.
 |  Instead of `result` I'm planning to use type variable introduced by
 |  the user for the result (as described in [[
 |  https://ghc.haskell.org/trac/ghc/wiki/InjectiveTypeFamilies#Proposal7
 |  | Proposal 7 ]] on the wiki.
 |
 |  - following Richard's suggestion I changed `InjectivityInfo` to
 |  contain `[Located name]` instead of `[LHsType name]`. But I keep
 |  getting Not in scope errors. When I try to compile:
 |  ```
  |  type family F a :: r | r -> a where
 |  F a = a
 |  ```
 |  I get errors that `r` and `a` are not in scope. According to my
 |  tracing line 1149 of RnSource.lhs is responsible for this. I looked at
 |  other code in this module and it seems to me that `bindHsTyVars`
 |  should bring these variables into scope when renaming `r - a`. I have
 |  no idea why this does not happen. Simon, help?
 |
 |  Also, build error reported by Harbormaster is nonsense.
 |
 |  REPOSITORY
 |rGHC Glasgow Haskell Compiler
 |
 |  REVISION DETAIL
 |https://phabricator.haskell.org/D202
 |
 |  REPLY HANDLER ACTIONS
 |Reply to comment, or !reject, !abandon, !reclaim, !resign, !rethink,
 |  !unsubscribe.
 |
 |  To: jstolarek, simonpj, austin
 |  Cc: goldfire, simonmar, ezyang, carter


Re: RFC: style cleanup guidelines for GHC, and related bikeshedding

2014-07-03 Thread Andrew Farmer
On Thu, Jul 3, 2014 at 4:13 AM, Joachim Breitner
m...@joachim-breitner.de wrote:

 On the other hand, having a “detab and rename” horizon where merging patches 
 from
 before is much harder, and where git log -L and git blame fail to
 work properly would be a hindrance.

Minor point, but you can use git blame -w to tell blame to ignore
whitespace changes and show you the last commit that actually changed
the code.


Re: Proposal: require Haddock comment for every new top-level function and type in GHC source code

2014-07-01 Thread Andrew Farmer
At the risk of sounding redundantly redundant, I'd like to third this.

My workflow for finding stuff in the GHC codebase is a mixture of grep
and Hoogle. Searching Hoogle for +ghc :: [TyVar] - Type - Type is
a huge timesaver, and Hoogle sends me to the generated haddock
comments. Usually the haddocks themselves aren't there, but the
Source link is a handy way to jump to the code and explore.

So having actual haddock documentation would only help in this regard.
The Notes are *great* for subtle issues with the implementation of
some function, but it'd be nice to have some commentary on how to
_use_ that function without having to understand how it works.
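
(For anyone unsure what is being asked for, the proposal amounts to a line or
two of Haddock above each top-level definition, e.g. this toy example with an
invented function:)

-- | Count how many times @x@ occurs in the given list.
countOcc :: Eq a => a -> [a] -> Int
countOcc x = length . filter (== x)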

On Mon, Jun 30, 2014 at 3:42 PM, Ben Gamari bgamari.f...@gmail.com wrote:
 David Luposchainsky dluposchain...@googlemail.com writes:

 Hey list,

 I am strongly in favour of the proposal. As a pedestrian-level GHC
 contributor, the *vast* majority of my time is spent trying to figure
 out what certain things do, when the answer could be found in a one-
 or two-line comment above a definition.

 I'd like to second this. As an occassional contributor, I find myself
 wading through a lot of code to deduce functions' purpose. While I'm
 often pleasantly surprised by the quality of the notes scattered about
 the code, per-definition Haddocks would fill in the many remaining gaps
 and provide a nice overview of each module.

 I agree that enforcing the quality of the rendered Haddocks is
 unnecessary. Once the language has been written there are many
 contributors (such as myself) who can further clean up the formatting.

 Cheers,

 - Ben




Re: C-- specfication

2014-05-04 Thread Andrew Farmer
Are all of these links collected on the GHC wiki somewhere? If not,
would you mind adding them?

I, for one, appreciate a curated list of references like this!

On Sat, May 3, 2014 at 5:33 AM, Arash Rouhani
rar...@student.chalmers.se wrote:
 (Sorry Florian, I forgot to reply to list!)

 Hi Florian!

 Forget Cminusminus.org, in my experience it seems to have diverged from the
 GHC version of Cminusminus.

 I would recommend these resources

 See the top of
 https://github.com/ghc/ghc/blob/master/compiler/cmm/CmmParse.y
 Be ready to occasionally look into
 https://github.com/ghc/ghc/blob/master/includes/Cmm.h
 Edward Yang's blog post is a must-read
 http://blog.ezyang.com/2013/07/no-grammar-no-problem/ (less than a year old)
 You can also get the big picture of Cmm from David Terei's bachelor thesis:
 https://davidterei.com/downloads/papers/terei:2009:honours_thesis.pdf
 2 years ago, Simon Marlow extended the classical Cmm syntax to make it much
 nicer:
 https://github.com/ghc/ghc/commit/a7c0387d20c1c9994d1100b14fbb8fb4e28a259e
 The commentary (it is kinda outdated in my experience, but worth taking a
 look :)), https://ghc.haskell.org/trac/ghc/wiki/Commentary/Rts/Cmm and
 https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/CmmType
 Read the code! There's a lot of Cmm files and after looking at various parts
 of it for a while parts start to make sense :)
 Shameless plug: You might find sections 4.2 and 4.2.1 from my master thesis
 helpful to understand the difference between arguments and fields.
 http://arashrouhani.com/papers/master-thesis.pdf

 And it will take time to learn Cmm. The most unintuitive thing for me that
 took me a while to understand is that there are no function calls in
 classical cmm code. The newer syntax allows function calls but you should
 know that they are kind of magical. Hope this helps! :)

 (Sorry for giving so many reading references :p)

 Cheers,
 Arash



 On 2014-05-03 12:05, Florian Weimer wrote:

 I'm looking for a specification of C--.  I can't find it on the
 cminuscminus.org web site, and it's also not included in the release
 tarball.  Does anybody know where to get it?


Re: Relocating (some of) GHC's core-libraries to github.com/haskell

2014-04-29 Thread Andrew Farmer
+1

On Tue, Apr 29, 2014 at 9:46 AM, Edward Kmett ekm...@gmail.com wrote:
 I would really like that as well.

 My experience is it is rather easy to get users to put together a pull
 request through github.

 It is rather more like pulling teeth to get them to use git properly and put
 together a traditional patch.

 This would greatly open up the workflow for end users contributing things
 like small documentation fixes and the like.

 -Edward


 On Tue, Apr 29, 2014 at 5:58 AM, Herbert Valerio Riedel hvrie...@gmail.com
 wrote:

 Hello Simon,

 On 2014-04-28 at 11:28:35 +0200, Simon Marlow wrote:

 [...]

  However, we can configure the lagged mirror such that we'd
  automatically
  mirror github's 'master' branch into our lagged mirror (we'd still be
  free to create local wip/* or ghc-7.10 branches at git.haskell.org if
  needed)
 
  I think that's fine.  As Simon points out, we already have lagging
  repo functionality in the form of the submodule links, so the repo on
  git.haskell.org can be a pure mirror.

 Just so I get this right, does pure mirror here mean that we don't
 want users to be able to push to the automatically mirrored repo on
 git.haskell.org at all, but rather the only way to get any commits into
 the git.haskell.org mirrored repo would be push it via the GitHub repo?

 (I'd like that, as it would make the set-up easier and hopefully less
 confusing, as there'd be only a single data-flow path)

 Cheers,
   hvr





Re: Trac seems to think I'm a spambot...?

2014-04-08 Thread Andrew Farmer
That moment when spammers start pushing language research forward by
generating functions from natural language specifications.


On Tue, Apr 8, 2014 at 1:08 PM, Artyom Kazak y...@artyom.me wrote:

 Look what I've found: http://codecha.org/ . It might be an easy solution
 to the Haskell-specific CAPTCHA problem.

 On 04/07/2014 06:53 PM, Daniel Trstenjak wrote:

 On Mon, Apr 07, 2014 at 09:27:32PM +0700, Kim-Ee Yeoh wrote:

 What if we replace captcha with a short, static question, the web form
 equivalent of a secret handshake? And give it enough weighting to
 override
 akismet?

 E.g.

 * What is Haskell's middle name?
 * What is SPJ's middle name?


 Yeah, I thought about something similar like: what's the result of 'map
 (+1) [1,2]'.

  The main drawback to this is that it'll only be a matter of time before
 spammers wise up. But that interval might be long enough for something
 better
 on the horizon, e.g. akismet gets a lot smarter, better blog posts on
 tracspam,
 etc.


 I don't think that the ghc wiki is of particular interest for spammers
 or that they gain a lot by understanding Haskell specifics. Most likely
 they will never notice it.


 Greetings,
 Daniel


Summarize #7602

2014-02-04 Thread Andrew Farmer
I know that #7602 has been mentioned as a final loose end in the RC. I was
wondering if someone in the know could summarize its status in regards to
how it effects GHC users.

Feel free to be blisteringly brief... ;-) I'm just looking for things like:
"this is a non-issue on modern OS X/Clang", or "it'll be fixed soon, just be
patient", or "we've done our part, this is Apple's problem now".

I gave the trac thread a read-through, but there seems to be several
degrees of flexibility/workarounds, so I'm still not sure if this is
something I should worry about on my OS X machine.

Thanks for your cycles!
Andrew


Re: Dynamically loading and unloading (C) object files

2013-11-01 Thread Andrew Farmer
The published version of that paper in the ACM digital library...

http://dl.acm.org/citation.cfm?id=1017478


On Thu, Oct 31, 2013 at 5:10 PM, Edward Z. Yang ezy...@mit.edu wrote:

  So that leads me to wonder: are there limitations that we should be
  aware of? Have I simply been lucky so far?

 If you are loading Haskell code, you will need to be very careful to
 make sure you load all of the dependencies as well.  There are a number
 of plugins packages and a paper:
 http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.9.7627

 GHC API changes a lot, so it's easy for these packages to get bitrotted.

 The upcoming release of GHC will run constructors (link-time
 initializers), but will not run destructors.

 There is only partial support for weak symbols.
 http://ghc.haskell.org/trac/ghc/ticket/

 Edward


Re: Is evacuate for StgMutArrPtrs and StgArrPtrs expensive to GC?

2013-10-01 Thread Andrew Farmer
I did indeed implement dynamic nursery sizing and did some preliminary
benchmarking. The headline figure: 15% speedup on the nofib/gc benchmarks,
though the variance was pretty large, and there were some slowdowns.

My scheme was very simple... I kept track of the size and rough collection
time of the previous three collections and did a sort of crude binary
search to find a minimum in the search space. I did it this way because it
was simple and required constant time and memory to make a decision. Though
one of the conclusions was that collection time was a bad metric, due to
the way the RTS re-uses blocks. As Simon pointed out, tracking retainment
or some other metric would probably be better, but I need to explore it.
Another result: the default size is almost always too small (at least for
the nofib programs). CPUs come with huge caches, and using the RTS flag -A
to set the allocation area to be roughly the size of the L3 cache usually
gave pretty decent speedups.

I did this for a class project, and had to put it down to focus on other
things, and just haven't picked it back up. I still have a patch laying
around, and several pages of notes with ideas for improvement in both the
metric and search. I'm hoping to pick it back up again in a couple months,
with an eye on a workshop paper, and a real patch for 7.10.


On Tue, Oct 1, 2013 at 3:36 AM, Simon Marlow marlo...@gmail.com wrote:

 It's typical for benchmarks that allocate a large data structure to spend
 a lot of time in the GC.  The data gets copied twice - once in the young
 generation and then again when promoted to the old generation.  You can
 make this kind of benchmark much faster by just using a bigger allocation
 area.

 There's nothing inherently costly about StgMutArrPtrs compared to other
 objects, except that they are variable size and therefore we can't unroll
 the copy loop, but I don't think that's a big effect.  The actual copying
 is the major cost.

 The way to improve this kind of benchmark would be to add some heuristics
 for varying the nursery size based on the quantity of data retained, for
 example.  I think there's a lot of room for improvement here, but someone
 needs to do some careful benchmarking and experimentation. Andrew Farmer
 did some work on this and allegedly got good results but we never saw the
 code (hint hint!).

 Cheers,
 Simon


 On 1 October 2013 06:43, Johan Tibell johan.tib...@gmail.com wrote:

 The code for 'allocate' in rts/sm/Storage.c doesn't seem that
 expensive. An extra branch compared to inline allocation and
 allocation is done in the next nursery block (risking fragmentation?).

 -- Johan

 On Mon, Sep 30, 2013 at 9:50 PM, Johan Tibell johan.tib...@gmail.com
 wrote:
  Hi,
 
  When I benchmark Data.HashMap.insert from unordered-containers
  (inserting the keys [0..1]) the runtime is dominated by GC:
 
  $ cat Test.hs
  module Main where
 
  import   Control.DeepSeq
  import   Control.Exception
  import   Control.Monad
  import qualified Data.HashMap.Strict as HM
  import   Data.List (foldl')
 
  main = do
  let ks = [0..1] :: [Int]
  evaluate (rnf ks)
   forM_ ([0..1000] :: [Int]) $ \ x -> do
   evaluate $ HM.null $ foldl' (\ m k -> HM.insert k x m) HM.empty
 ks
 
  $ perf record -g ./Test +RTS -s
 6,187,678,112 bytes allocated in the heap
 3,309,887,128 bytes copied during GC
 1,299,200 bytes maximum residency (1002 sample(s))
   118,816 bytes maximum slop
 5 MB total memory in use (0 MB lost due to fragmentation)
 
                                      Tot time (elapsed)  Avg pause  Max pause
  Gen  0     11089 colls,     0 par    1.31s    1.30s     0.0001s    0.0005s
  Gen  1      1002 colls,     0 par    0.49s    0.51s     0.0005s    0.0022s
 
  INIT    time    0.00s  (  0.00s elapsed)
  MUT     time    1.02s  (  1.03s elapsed)
  GC      time    1.80s  (  1.80s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    2.82s  (  2.84s elapsed)
 
%GC time  63.7%  (63.5% elapsed)
 
  Alloc rate    6,042,264,963 bytes per MUT second
 
Productivity  36.3% of total user, 36.1% of total elapsed
 
  $ perf report
  41.46%  Test  Test   [.] evacuate
  15.47%  Test  Test   [.] scavenge_block
  11.04%  Test  Test   [.] s3cN_info
   8.74%  Test  Test   [.] s3aZ_info
   3.59%  Test  Test   [.] 0x7ff5
   2.83%  Test  Test   [.] scavenge_mut_arr_ptrs
   2.69%  Test  libc-2.15.so   [.] 0x147fd9
   2.51%  Test  Test   [.] allocate
   2.00%  Test  Test   [.] s3oo_info
   0.91%  Test  Test   [.] todo_block_full
   0.87%  Test  Test   [.] hs_popcnt64
   0.80%  Test  Test   [.] s3en_info
   0.62%  Test  Test   [.] s3el_info
 
  Is GC:ing StgMutArrPtrs and StgArrPtrs, which I create a lot of, more
  expensive than GC:ing

Re: Is evacuate for StgMutArrPtrs and StgArrPtrs expensive to GC?

2013-10-01 Thread Andrew Farmer
Definitely... I'm somewhat fully occupied for the next two weeks, but
should be able to dig it out then and organize/share it.
On Oct 1, 2013 3:50 PM, Carter Schonwald carter.schonw...@gmail.com
wrote:

 awesome!

 please let us know when some of the info is available publicly, perhaps so
  other folks can help out with experimentation


 On Tue, Oct 1, 2013 at 4:30 PM, Andrew Farmer afar...@ittc.ku.edu wrote:

 I did indeed implement dynamic nursery sizing and did some preliminary
 benchmarking. The headline figure: 15% speedup on the nofib/gc benchmarks,
 though the variance was pretty large, and there were some slowdowns.

 My scheme was very simple... I kept track of the size and rough
 collection time of the previous three collections and did a sort of crude
 binary search to find a minimum in the search space. I did it this way
 because it was simple and required constant time and memory to make a
 decision. Though one of the conclusions was that collection time was a bad
 metric, due to the way the RTS re-uses blocks. As Simon pointed out,
 tracking retainment or some other metric would probably be better, but I
 need to explore it. Another result: the default size is almost always too
 small (at least for the nofib programs). CPUs come with huge caches, and
 using the RTS flag -A to set the allocation area to be roughly the size of
 the L3 cache usually gave pretty decent speedups.
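
For a concrete picture (an illustrative invocation, not taken from the thread;
the right value is machine-dependent and should roughly match the L3 cache),
the allocation area for the benchmark above can be enlarged with the -A RTS flag:

$ ./Test +RTS -A8m -s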

 I did this for a class project, and had to put it down to focus on other
 things, and just haven't picked it back up. I still have a patch laying
 around, and several pages of notes with ideas for improvement in both the
 metric and search. I'm hoping to pick it back up again in a couple months,
 with an eye on a workshop paper, and a real patch for 7.10.


 On Tue, Oct 1, 2013 at 3:36 AM, Simon Marlow marlo...@gmail.com wrote:

 It's typical for benchmarks that allocate a large data structure to
 spend a lot of time in the GC.  The data gets copied twice - once in the
 young generation and then again when promoted to the old generation.  You
 can make this kind of benchmark much faster by just using a bigger
 allocation area.

 There's nothing inherently costly about StgMutArrPtrs compared to other
 objects, except that they are variable size and therefore we can't unroll
 the copy loop, but I don't think that's a big effect.  The actual copying
 is the major cost.

 The way to improve this kind of benchmark would be to add some
 heuristics for varying the nursery size based on the quantity of data
 retained, for example.  I think there's a lot of room for improvement here,
 but someone needs to do some careful benchmarking and experimentation.
 Andrew Farmer did some work on this and allegedly got good results but we
 never saw the code (hint hint!).

 Cheers,
 Simon


 On 1 October 2013 06:43, Johan Tibell johan.tib...@gmail.com wrote:

 The code for 'allocate' in rts/sm/Storage.c doesn't seem that
 expensive. An extra branch compared to inline allocation and
 allocation is done in the next nursery block (risking fragmentation?).

 -- Johan

 On Mon, Sep 30, 2013 at 9:50 PM, Johan Tibell johan.tib...@gmail.com
 wrote:
  Hi,
 
  When I benchmark Data.HashMap.insert from unordered-containers
  (inserting the keys [0..1]) the runtime is dominated by GC:
 
  $ cat Test.hs
  module Main where
 
  import   Control.DeepSeq
  import   Control.Exception
  import   Control.Monad
  import qualified Data.HashMap.Strict as HM
  import   Data.List (foldl')
 
  main = do
    let ks = [0..1] :: [Int]
    evaluate (rnf ks)
    forM_ ([0..1000] :: [Int]) $ \ x -> do
      evaluate $ HM.null $ foldl' (\ m k -> HM.insert k x m) HM.empty ks
 
  $ perf record -g ./Test +RTS -s
 6,187,678,112 bytes allocated in the heap
 3,309,887,128 bytes copied during GC
 1,299,200 bytes maximum residency (1002 sample(s))
   118,816 bytes maximum slop
 5 MB total memory in use (0 MB lost due to
 fragmentation)
 
  Tot time (elapsed)  Avg pause
  Max pause
Gen  0 11089 colls, 0 par1.31s1.30s 0.0001s
  0.0005s
Gen  1  1002 colls, 0 par0.49s0.51s 0.0005s
  0.0022s
 
INITtime0.00s  (  0.00s elapsed)
MUT time1.02s  (  1.03s elapsed)
GC  time1.80s  (  1.80s elapsed)
EXITtime0.00s  (  0.00s elapsed)
Total   time2.82s  (  2.84s elapsed)
 
%GC time  63.7%  (63.5% elapsed)
 
Alloc rate6,042,264,963 bytes per MUT second
 
Productivity  36.3% of total user, 36.1% of total elapsed
 
  $ perf report
  41.46%  Test  Test   [.] evacuate
  15.47%  Test  Test   [.] scavenge_block
  11.04%  Test  Test   [.] s3cN_info
   8.74%  Test  Test   [.] s3aZ_info
   3.59%  Test  Test   [.] 0x7ff5
   2.83%  Test  Test   [.] scavenge_mut_arr_ptrs
   2.69%  Test

Re: defunctionalization

2013-07-18 Thread Andrew Farmer
What happens when you put NOINLINE on the function and compile with
-fexpose-all-unfoldings? Does that get the behavior you want?
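
Concretely, the suggestion amounts to something like the following sketch
(module and function names are invented for illustration):

module Defun where

-- Stand-in for whatever binding should keep its RHS in the interface file
-- without being inlined at call sites.
f :: Int -> Int
f x = x * 2 + 1
{-# NOINLINE f #-}

compiled with:

$ ghc -O2 -fexpose-all-unfoldings -c Defun.hs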


On Thu, Jul 18, 2013 at 2:20 AM, Simon Peyton-Jones
 simo...@microsoft.com wrote:

  It seems a little weird, but the internal data types can express it, so
 if you can make the front end do the right thing I’d be happy to take it.
 (Don’t forget the manual.)

 Simon

 From: ghc-devs [mailto:ghc-devs-boun...@haskell.org] On Behalf Of Nicolas Frisby
 Sent: 16 July 2013 21:29
 To: ghc-devs@haskell.org
 Subject: Re: defunctionalization

 Ah, I misread that TidyPgm function. It looks like if I build the
 CoreUnfolding, GHC will respect it. It's just rejecting the pragma
 combination in HsSyn.

 On Jul 16, 2013 3:22 PM, Nicolas Frisby nicolas.fri...@gmail.com
 wrote:
 
  I'd like to put a NOINLINE and an INLINABLE pragma on a binding.
 
  (I'm sketching a defunctionalization pass. I'd like the `apply` routine
 RHS to make it into the interface file, but I do not want it to be inlined,
 since that'd undo the defunctionalization.)
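
 As a sketch of the shape such a pass produces (names invented for
 illustration): each first-class function becomes a constructor of a data
 type, and `apply` dispatches on the constructor.

 data Fun = Inc | Add Int

 apply :: Fun -> Int -> Int
 apply Inc     x = x + 1
 apply (Add n) x = x + n
 {-# NOINLINE apply #-}

 Inlining `apply` at a call site would reintroduce the higher-order code
 the pass just removed; INLINABLE, by contrast, is wanted only so that the
 RHS still lands in the interface file.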
 
  In other words, I'd like a CoreUnfolding value with the uf_guidance =
 UnfNever.
 
  It seems TidyPgm.addExternal ignores such a core unfolding.
 
  Would GHC consider a patch to make this work?
 
  Thanks.

 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs


___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs


Re: GHC plugin without registering the plugin

2013-05-05 Thread Andrew Farmer
I've also been trying to find a good way to do this. I had a somewhat hacky
driver that would use GHC to compile the plugin in a temporary directory,
then create a custom package database with only that plugin and add that
database to the stack of package databases when compiling the target
program. The annoying part was creating the package description, as you
need to determine which other packages the plugin depends on. However, it
just occurred to me that ghc --make does this, so its probably a matter
of finding the proper GHC API call to get the list of packages used to
build the plugin and converting that into a package description.
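
Roughly, the manual workflow being described looks like this (a sketch with
invented paths and names; the package description my-plugin.conf still has to
be written by hand with the right import-dirs and dependencies):

$ ghc -c MyPlugin.hs
$ ghc-pkg init local-plugin.db
$ ghc-pkg --package-db=local-plugin.db register my-plugin.conf
$ ghc --make -package-db=local-plugin.db -fplugin=MyPlugin Target.hs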

Can anyone familiar with either the plugin API or Cabal/ghc-pkg comment on
whether there is an easier way?

Thanks!
Andrew


On Sat, May 4, 2013 at 8:07 PM, Erik de Castro Lopo mle...@mega-nerd.com wrote:

 Hi all,

 I'm messing about with writing a GHC plugin and looking at the
 documentation here:


 http://www.haskell.org/ghc/docs/7.6.3/html/users_guide/compiler-plugins.html

 which states:

 Plugins can be specified on the command line with the option
 -fplugin=module where module is a module in a registered package
 that exports a plugin.

 Is there any way I can have a plugin in my local source tree rather than
 being installed and registered? Not having to cabal install it would make
 development and debugging somewhat easier.
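
 For reference, the plugin module itself is small -- essentially the example
 from the user's guide linked above (a sketch; the module name is arbitrary):

 module MyPlugin (plugin) where

 import GhcPlugins

 plugin :: Plugin
 plugin = defaultPlugin { installCoreToDos = install }

 install :: [CommandLineOption] -> [CoreToDo] -> CoreM [CoreToDo]
 install _ todos = do
   putMsgS "MyPlugin is running"
   return todos

 The question is then how to make that module visible to -fplugin= without
 publishing and registering a full package.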

 Cheers,
 Erik
 --
 --
 Erik de Castro Lopo
 http://www.mega-nerd.com/

 ___
 ghc-devs mailing list
 ghc-devs@haskell.org
 http://www.haskell.org/mailman/listinfo/ghc-devs

___
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs