Re: Register Allocator Tests

2016-10-11 Thread Thomas Jakway
Ah okay, thanks!

On Oct 11, 2016 8:25 PM, "Ömer Sinan Ağacan"  wrote:

> > Is it not possible to unit test GHC?
>
> You need to export functions you want to test, and then write a program
> that
> tests those functions using the `ghc` package.
>
> See
> https://github.com/ghc/ghc/blob/master/testsuite/tests/unboxedsums/unboxedsums_unit_tests.hs
> for an example.
>
> 2016-10-11 17:50 GMT-04:00 Thomas Jakway :
>
>> I read somewhere that fixing the graph register allocator would be a good
>> project so I thought I'd look into it. I couldn't find any tickets about it
>> on Trac though so I was poking around for tests to see what (if anything)
>> was wrong with it.
>>
>> After I sent that last email I googled around for how to write ghc unit
>> tests and this
>> 
>> is the only thing I found.  Is it not possible to unit test GHC?  If not
>> are there plans/discussions about this?  I think it'd help document the
>> code base if nothing else and it'd be a good way to get my feet wet.
>> On 10/11/2016 02:13 PM, Ben Gamari wrote:
>>
>> Thomas Jakway   writes:
>>
>>
>> Can anyone point me to the register allocator tests (especially for the
>> graph register allocator)?  Can't seem to find them and grepping doesn't
>> turn up much (pretty much just
>> testsuite/tests/codeGen/should_run/cgrun028.h).
>>
>>
>> What sort of tests are you looking for in particular? I'm afraid all we
>> have are regression tests covering the code generator as a whole.
>>
>> Cheers,
>>
>> - Ben
>>
>>
>>
>> ___
>> ghc-devs mailing list
>> ghc-devs@haskell.org
>> http://mail.haskell.org/cgi-bin/mailman/listinfo/ghc-devs
>>
>>
>


Re: Register Allocator Tests

2016-10-11 Thread Ömer Sinan Ağacan
> Is it not possible to unit test GHC?

You need to export functions you want to test, and then write a program that
tests those functions using the `ghc` package.

See
https://github.com/ghc/ghc/blob/master/testsuite/tests/unboxedsums/unboxedsums_unit_tests.hs
for an example.
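A minimal sketch of such a test executable, assuming the `ghc` and `ghc-paths` packages and a GHC-8.0-era API (the exact imports and session setup vary by GHC version):

```haskell
-- Sketch of a unit-test program linked against the `ghc` package, in the
-- spirit of unboxedsums_unit_tests.hs.  `GHC.Paths.libdir` comes from the
-- ghc-paths package; the function under test is left as a placeholder.
module Main where

import GHC
import GHC.Paths (libdir)
import Control.Monad.IO.Class (liftIO)

main :: IO ()
main = runGhc (Just libdir) $ do
    dflags <- getSessionDynFlags
    _ <- setSessionDynFlags dflags
    -- Call the exported compiler functions under test here and compare
    -- their results against expected values.
    liftIO (putStrLn "register allocator tests would go here")
```

Built with `ghc -package ghc Main.hs`, such a program can call any function the compiler exports and check its result directly, which is the approach the unboxed-sums tests take.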

2016-10-11 17:50 GMT-04:00 Thomas Jakway :

> I read somewhere that fixing the graph register allocator would be a good
> project so I thought I'd look into it. I couldn't find any tickets about it
> on Trac though so I was poking around for tests to see what (if anything)
> was wrong with it.
>
> After I sent that last email I googled around for how to write ghc unit
> tests and this
>  is
> the only thing I found.  Is it not possible to unit test GHC?  If not are
> there plans/discussions about this?  I think it'd help document the code
> base if nothing else and it'd be a good way to get my feet wet.
> On 10/11/2016 02:13 PM, Ben Gamari wrote:
>
> Thomas Jakway   writes:
>
>
> Can anyone point me to the register allocator tests (especially for the
> graph register allocator)?  Can't seem to find them and grepping doesn't
> turn up much (pretty much just
> testsuite/tests/codeGen/should_run/cgrun028.h).
>
>
> What sort of tests are you looking for in particular? I'm afraid all we
> have are regression tests covering the code generator as a whole.
>
> Cheers,
>
> - Ben
>
>
>
>
>


Re: Register Allocator Tests

2016-10-11 Thread Thomas Jakway
I read somewhere that fixing the graph register allocator would be a 
good project so I thought I'd look into it. I couldn't find any tickets 
about it on Trac though so I was poking around for tests to see what (if 
anything) was wrong with it.


After I sent that last email I googled around for how to write ghc unit 
tests and this 
 
is the only thing I found.  Is it not possible to unit test GHC? If not 
are there plans/discussions about this?  I think it'd help document the 
code base if nothing else and it'd be a good way to get my feet wet.


On 10/11/2016 02:13 PM, Ben Gamari wrote:

Thomas Jakway  writes:


Can anyone point me to the register allocator tests (especially for the
graph register allocator)?  Can't seem to find them and grepping doesn't
turn up much (pretty much just
testsuite/tests/codeGen/should_run/cgrun028.h).


What sort of tests are you looking for in particular? I'm afraid all we
have are regression tests covering the code generator as a whole.

Cheers,

- Ben




Re: Register Allocator Tests

2016-10-11 Thread Ben Gamari
Thomas Jakway  writes:

> Can anyone point me to the register allocator tests (especially for the 
> graph register allocator)?  Can't seem to find them and grepping doesn't 
> turn up much (pretty much just 
> testsuite/tests/codeGen/should_run/cgrun028.h).
>
What sort of tests are you looking for in particular? I'm afraid all we
have are regression tests covering the code generator as a whole.

Cheers,

- Ben




Re: qualified module export

2016-10-11 Thread Iavor Diatchki
Hello,

There may be some more thinking to be done on the design of this feature.
In particular, if a module `M` has an export declaration `module T`, this
is not at all the same as adding `import T` in modules importing `M`.  The
reason is that the meaning of `module T` depends on what is in scope in `M`
and under what names.  For example:
   * `module T` may export only some of the names from `T` (e.g., if `M`
contains `import T(onlyThisName)`); or,
   * `module T` may export the names from an entirely different module
(e.g., if `M` contains `import S as T`); or,
   * `module T` may export a combination of multiple modules (e.g., if `M`
contains `import S1 as T` and `import S2 as T`).

So, I would expect an export of the form `qualified module T as X` to work
in a similar fashion (for the full details on the current semantics you
could have a look at [1]).
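All three of those cases can be seen at once in a single (hypothetical) module; `S`, `T`, and `onlyThisName` are illustrative names, not from this thread:

```haskell
-- `module T` exports whatever is in scope under the qualifier T,
-- which here is all of S's names plus `onlyThisName` from the real T.
module M (module T) where

import T (onlyThisName)  -- only part of the real T
import S as T            -- an entirely different module, aliased to T
```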

The next issue would be that, currently, entities exported by a module are
only identified by an unqualified name, and the imports introduce qualified
names as necessary.  It might make sense to allow modules to also export
qualified names instead, but then we'd have to decide what happens on the
importing end.  Presumably, a simple `import M` would now bring both some
qualified and some unqualified names.  This means that the explicit import
and hiding lists would have to support qualified names, which seems doable.
However, we'd also have to decide how `import M as X` works, in particular
how does it affect imported qualified names.  One option would be to have
`X` replace the qualifier, so if `A.b` is imported via `import M as X`, the
resulting name would be `X.b`.  Another option would be to have `X` extend
the qualifier, so `A.b` would become `X.A.b` locally.  Neither seems
perfect:  the first one is somewhat surprising, where you might
accidentally "overwrite" a qualifier and introduce name conflicts; the
second does not allow exported qualified names to ever get shorter.
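Concretely, for a module `M` that exported a qualified name `A.b`, the two options would read as follows (hypothetical syntax and names; neither option exists today):

```haskell
-- Suppose M exports the qualified name A.b.
import M as X
-- Option 1 ("replace" the qualifier): A.b comes into scope as X.b
-- Option 2 ("extend" the qualifier):  A.b comes into scope as X.A.b
```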

I hope this is helpful,
-Iavor

[1] http://yav.github.io/publications/modules98.pdf


On Tue, Oct 11, 2016 at 8:54 AM, Karl Cronburg  wrote:

> Hello,
>
> I'm attempting to add support for export of qualified modules (feature
> request #8043), and any guidance would be greatly appreciated. Namely I'm
> very familiar with languages / grammars / happy and was easily able to add
> an appropriate production alternative to Parser.y to construct a new AST
> node when 'qualified module' is seen in the export list, i.e.:
>
> |  'module' modid {% amsu (sLL $1 $> (IEModuleContents $2))
>   [mj AnnModule $1] }
> |  'qualified' 'module' qconid --maybeas
>   {% amsu (sLL $2 $> (IEModuleQualified $3))
>   [mj AnnQualified $1] }
>
> But now I'm lost in the compiler internals. Where should I be looking /
> focusing on? In particular:
>
> - Where do exported identifiers get added to the list of "[LIE Name]" in
> ExportAccum (in TcRnExports.hs)?
>
> Thanks,
> -Karl Cronburg-
>
>
>


qualified module export

2016-10-11 Thread Karl Cronburg
Hello,

I'm attempting to add support for export of qualified modules (feature
request #8043), and any guidance would be greatly appreciated. Namely I'm
very familiar with languages / grammars / happy and was easily able to add
an appropriate production alternative to Parser.y to construct a new AST
node when 'qualified module' is seen in the export list, i.e.:

|  'module' modid {% amsu (sLL $1 $> (IEModuleContents $2))
  [mj AnnModule $1] }
|  'qualified' 'module' qconid --maybeas
  {% amsu (sLL $2 $> (IEModuleQualified $3))
  [mj AnnQualified $1] }

But now I'm lost in the compiler internals. Where should I be looking /
focusing on? In particular:

- Where do exported identifiers get added to the list of "[LIE Name]" in
ExportAccum (in TcRnExports.hs)?

Thanks,
-Karl Cronburg-


Re: Reading floating point

2016-10-11 Thread Brandon Allbery
On Tue, Oct 11, 2016 at 10:41 AM, Carter Schonwald <
carter.schonw...@gmail.com> wrote:

> Could you elaborate or point me to where this philosophy is articulated in
> commentary in base or in the language standards ?


https://www.haskell.org/onlinereport/haskell2010/haskellch11.html#x18-18600011.4
is instructive, insofar as one can assume the restrictions it quotes that
do not agree with the semantics of Haskell imply a philosophy.

-- 
brandon s allbery kf8nh   sine nomine associates
allber...@gmail.com  ballb...@sinenomine.net
unix, openafs, kerberos, infrastructure, xmonad    http://sinenomine.net


Re: Reading floating point

2016-10-11 Thread Carter Schonwald
Could you elaborate or point me to where this philosophy is articulated in
commentary in base or in the language standards ?

On Monday, October 10, 2016, David Feuer  wrote:

> It may currently be true for floats, but it's never been true in general,
> particularly with regard to records. Read is not actually designed to parse
> Haskell; it's for parsing "Haskell-like" things. Because it, unlike a true
> Haskell parser, is type-directed, there are somewhat different trade-offs.
>
> On Oct 11, 2016 1:50 AM, "Carter Schonwald" wrote:
>
>> How is that not a bug? We should be able to read back floats
>>
>> On Monday, October 10, 2016, David Feuer wrote:
>>> It doesn't, and it never has.
>>>
>>> On Oct 10, 2016 6:08 PM, "Carter Schonwald" 
>>> wrote:
>>>
 Read should accept exactly the valid source literals for a type.

 On Monday, October 10, 2016, David Feuer  wrote:

> What does any of that have to do with the Read instances?
>
> On Oct 10, 2016 1:56 PM, "Carter Schonwald" <
> carter.schonw...@gmail.com> wrote:
>
>> The right solution is to fix things so we have scientific notation
>> literal rep available.  Any other contortions run into challenges in
>> repsentavility of things.  That's of course ignoring denormalized floats,
>> infinities, negative zero and perhaps nans.
>>
>> At the very least we need to efficiently and safely support
>> everything but nan. And I have some ideas for that I hope to share soon.
>>
>> On Monday, October 10, 2016, David Feuer 
>> wrote:
>>
>>> I fully expect this to be somewhat tricky, yes. But some aspects of
>>> the current implementation strike me as pretty clearly non-optimal. 
>>> What I
>>> meant about going through Rational is that given "625e-5", say, it
>>> calculates 625 % 10^5, producing a fraction in lowest terms, before
>>> calling
>>> fromRational, which itself invokes fromRat'', a division function 
>>> optimized
>>> for a special case that doesn't seem too relevant in this context. I 
>>> could
>>> be mistaken, but I imagine even reducing to lowest terms is useless 
>>> here.
>>> The separate treatment of the digits preceding and following the decimal
>>> point doesn't do anything obviously useful either. If we (effectively)
>>> normalize in decimal to an integral mantissa, for example, then we can
>>> convert the whole mantissa to an Integer at once; this will balance the
>>> merge tree better than converting the two pieces separately and 
>>> combining.
>>>
>>> On Oct 10, 2016 6:00 AM, "Yitzchak Gale"  wrote:
>>>
>>> The way I understood it, it's because the type of "floating point"
>>> literals is
>>>
>>> Fractional a => a
>>>
>>> so the literal parser has no choice but to go via Rational. Once you
>>> have that, you use the same parser for those Read instances to ensure
>>> that the result is identical to what you would get if you parse it as
>>> a literal in every case.
>>>
>>> You could replace the Read parsers for Float and Double with much
>>> more
>>> efficient ones. But you would need to provide some other guarantee of
>>> consistency with literals. That would be more difficult to achieve
>>> than one might think - floating point is deceivingly tricky. There
>>> are
>>> already several good parsers in the libraries, but I believe all of
>>> them can provide different results than literals in some cases.
>>>
>>> YItz
>>>
>>> On Sat, Oct 8, 2016 at 10:27 PM, David Feuer 
>>> wrote:
>>> > The current Read instances for Float and Double look pretty iffy
>>> from an
>>> > efficiency standpoint. Going through Rational is exceedingly
>>> weird: we have
>>> > absolutely nothing to gain by dividing out the GCD, as far as I
>>> can tell.
>>> > Then, in doing so, we read the digits of the integral part to form
>>> an
>>> > Integer. This looks like a detour, and particularly bad when it
>>> has many
>>> > digits. Wouldn't it be better to normalize the decimal
>>> representation first
>>> > in some fashion (e.g., to 0.xxexxx) and go from there?
>>> Probably less
>>> > importantly, is there some way to avoid converting the mantissa to
>>> an
>>> > Integer at all? The low digits may not end up making any difference
>>> > whatsoever.
>>> >
>>> >
>>> >
>>>
>>>
>>>
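The round-tripping under discussion can be checked with a small standalone program (not from the thread): read a scientific-notation string as a Double and compare it with `fromRational` applied to the exact decimal fraction it denotes.

```haskell
-- `read` for Double parses the decimal digits exactly and converts via
-- Rational, so the result must equal fromRational of the same fraction
-- (625e-5 denotes 625 / 10^5, i.e. 1 % 160 in lowest terms).
import Data.Ratio ((%))

main :: IO ()
main = do
  let d = read "625e-5" :: Double
  print d                                   -- 6.25e-3
  print (d == fromRational (625 % 100000))  -- True
```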