Re: Problems with installing dph

2013-08-13 Thread Ben Lippmeier

On 13/08/2013, at 9:40 PM, Jan Clemens Gehrke wrote:

 Hi Glasgow-Haskell-Users, 
 
 I'm trying to get started with DPH and have some problems. 
 If I try getting DPH with 
 cabal install dph-examples 
 I get two warnings numerous times. The first warning is: 
 
 You are using a new version of LLVM that hasn't been tested yet! 
 We will try though... 
 
 and the second one: 
 
 Warning: vectorisation failure: identityConvTyCon: type constructor contains 
 parallel arrays [::] 
   Could NOT call vectorised from original version 

You can safely ignore this.


 Cabal finishes with: 
 
 Installing executable(s) in /home/clemens/.cabal/bin 
 Installed dph-examples-0.7.0.5 
 
 If I try compiling the first example from 
 http://www.haskell.org/haskellwiki/GHC/Data_Parallel_Haskell 
 with 
 ghc -c -Odph -fdph-par DotP.hs 
 I get 
 ghc: unrecognised flags: -fdph-par 

The wiki page is old and badly needs updating. We removed the -fdph-par flag 
about a year ago.

Check the dph-examples package for the correct compiler flags to use, eg:

-eventlog -rtsopts -threaded -fllvm -Odph -package dph-lifted-vseg -fcpr-off 
-fsimpl-tick-factor=1000
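
For example, compiling the DotP.hs program from the wiki with those flags would look something like this (a sketch only; the cabal file in dph-examples has the authoritative set):

  ghc -eventlog -rtsopts -threaded -fllvm -Odph \
      -package dph-lifted-vseg -fcpr-off -fsimpl-tick-factor=1000 \
      DotP.hs -o dotp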

Also note that DPH is still an experimental voyage into theoretical computer 
science. It should compile programs, and you should be able to run them, but 
they won't be fast enough to solve any of your actual problems.

Ben.



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: GHC 7.8 release?

2013-02-07 Thread Ben Lippmeier

On 08/02/2013, at 5:15 AM, Simon Peyton-Jones wrote:

 So perhaps we principally need a way to point people away from GHC and 
 towards HP?  eg We could prominently say at every download point “Stop!  Are 
 you sure you want this?  You might be better off with the Haskell Platform!  
 Here’s why...”.

Right now, the latest packages uploaded to Hackage get built with ghc-7.6 
(only), and all the pages say Built on ghc-7.6. By doing this we force *all* 
library developers to run GHC 7.6. I think this sends the clearest message 
about what the real GHC version is. 

We'd have more chance of turning Joe User off the latest GHC release if Hackage 
was clearly split into stable/testing channels. Linux distros have been doing 
this for years.

Ben.

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Does GHC still support x87 floating point math?

2012-12-06 Thread Ben Lippmeier

On 06/12/2012, at 12:12 , Johan Tibell wrote:

 I'm currently trying to implement word2Double#. Other such primops
 support both x87 and sse floating point math. Do we still support x87
 fp math? Which compiler flag enables it?

It's on by default unless you use the -msse2 flag. The x87 support is horribly 
slow though. I don't think anyone would notice if you deleted the x87 code and 
made SSE the default, especially now that we have the LLVM code generator. SSE 
has been the way to go for over 10 years now.
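
For reference, selecting the floating point support explicitly looks like this (a sketch; -msse2 turns on SSE2 in the x86 native code generator, and -fllvm sidesteps the question by using the LLVM backend):

  ghc -O2 -msse2 Main.hs   -- native code generator with SSE2 float math
  ghc -O2 -fllvm Main.hs   -- LLVM backend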

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: The end of an era, and the dawn of a new one

2012-12-06 Thread Ben Lippmeier

On 06/12/2012, at 3:56 , Simon Peyton-Jones wrote:

 Particularly valuable are offers to take responsibility for a
 particular area (eg the LLVM code generator, or the FFI).  I'm
 hoping that this sea change will prove to be quite empowering,
 with GHC becoming more and more a community project, more
 resilient with fewer single points of failure. 

The LLVM project has recently come to the same point. The codebase has become 
too large for Chris Lattner to keep track of it all, so they've moved to a 
formal Code Ownership model. People own particular directories of the code 
base, and the code owners are expected to review patches for those directories.

The GHC project doesn't have a formal patch review process, I think because the 
people with commit access on d.h.o generally know who owns what. Up until last 
week I think it was SPJ owns the type checker and simplifier, and SM owns 
everything else. :-)

At this stage, I think it would help if we followed the LLVM approach of having 
a formal CODE_OWNERS file in the root path of the repo explicitly listing the 
code owners. That way GHC HQ knows what's covered and what still needs a 
maintainer. The LLVM version is here [1].
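
To illustrate, the entries are a simple field format, along these lines (a sketch only; the two assignments are the ones mentioned in this mail, and LLVM's file also carries an E: line for the owner's email):

  N: Simon Peyton Jones
  D: Type checker, simplifier

  N: Ben Lippmeier
  D: Register allocators, non-LLVM native code generators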

Code owners would:
1) Be the go-to person when other developers have questions about that code.
2) Fix bugs in it that no-one else has claimed.
3) Generally keep the code tidy, documented and well-maintained.

Simon: do you want a CODE_OWNERS file? If so then I can start it. I think it's 
better to have it directly in the repo than on the wiki; that way no-one who 
works on the code can miss it.

I suppose I'm the default owner of the register allocators and non-LLVM native 
code generators.

Ben.

[1] http://llvm.org/viewvc/llvm-project/llvm/trunk/CODE_OWNERS.TXT?view=markup




___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: The end of an era, and the dawn of a new one

2012-12-06 Thread Ben Lippmeier

On 07/12/2012, at 4:21 , Ian Lynagh wrote:

 On Thu, Dec 06, 2012 at 09:56:55PM +1100, Ben Lippmeier wrote:
 
 I suppose I'm the default owner of the register allocators and non-LLVM 
 native code generators.
 
 Great, thanks!
 
 By the way, if you feel like doing some hacking this holiday season,
 then you might be interested in
http://hackage.haskell.org/trac/ghc/ticket/7063

Ah, holidays. Finally I'll have time to get some work done... :-)

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: GHC Performance Tsar

2012-12-04 Thread Ben Lippmeier

On 01/12/2012, at 1:42 AM, Simon Peyton-Jones wrote:

 |  While writing a new nofib benchmark today I found myself wondering
 |  whether all the nofib benchmarks are run just before each release,
 
 I think we could do with a GHC Performance Tsar.  Especially now that Simon 
 has changed jobs, we need to try even harder to broaden the base of people 
 who help with GHC.  It would be amazing to have someone who was willing to:
 
 * Run nofib benchmarks regularly, and publish the results
 
 * Keep baseline figures for GHC 7.6, 7.4, etc so we can keep
   track of regressions
 
 * Investigate regressions to see where they come from; ideally
   propose fixes.
 
 * Extend nofib to contain more representative programs (as Johan is
   currently doing).
 
 That would help keep us on the straight and narrow.  


I was running a performance regression buildbot for a while a year ago, but 
gave it up because I didn't have time to chase down the breakages. At the time 
we were primarily worried about the asymptotic performance of DPH, and fretting 
about a few percent absolute performance was too much of a distraction. 

However: if someone wants to pick this up then they may get some use out of the 
code I wrote for it. The dph-buildbot package in the DPH repository should 
still compile. This package uses 
http://hackage.haskell.org/package/buildbox-1.5.3.1 which includes code for 
running tests, collecting the timings, comparing against a baseline, making 
pretty reports etc. There is then a second package buildbox-tools which has a 
command line tool for listing the benchmarks that have deviated from the 
baseline by a particular amount.

Here is an example of a report that dph-buildbot made: 

http://log.ouroborus.net/limitingfactor/dph/nightly-20110809_000147.txt

Ben.




___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: parallel garbage collection performance

2012-06-18 Thread Ben Lippmeier

On 19/06/2012, at 24:48 , Tyson Whitehead wrote:

 On June 18, 2012 04:20:51 John Lato wrote:
 Given this, can anyone suggest any likely causes of this issue, or
 anything I might want to look for?  Also, should I be concerned about
 the much larger gc_alloc_block_sync level for the slow run?  Does that
 indicate the allocator waiting to alloc a new block, or is it
 something else?  Am I on completely the wrong track?
 
 A total shot in the dark here, but wasn't there something about really bad 
 performance when you used all the CPUs on your machine under Linux?
 
 Presumably very tight coupling that is causing all the threads to stall 
 every time the OS needs to do something, or something?

This can be a problem for data parallel computations (like in Repa). In Repa 
all threads in the gang are supposed to run for the same time, but if one gets 
swapped out by the OS then the whole gang is stalled.

I tend to get best results using -N7 for an 8 core machine. 

It is also important to enable thread affinity (with the -qa flag). 

For a Repa program on an 8 core machine I use +RTS -N7 -qa -qg
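
Spelled out (MyProg is a stand-in name; the program must be built with -threaded for the RTS to accept -N):

  ghc -O2 -threaded --make MyProg.hs
  ./MyProg +RTS -N7 -qa -qg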

Ben.



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: parallel garbage collection performance

2012-06-18 Thread Ben Lippmeier

On 19/06/2012, at 10:59 , Manuel M T Chakravarty wrote:

 I wonder, do we have a Repa FAQ (or similar) that explains such issues? (And 
 is easily discoverable?)

I've been trying to collect the main points in the haddocs for the main module 
[1], but this one isn't there yet.

I need to update the Repa tutorial on the Haskell wiki, and this should also 
go into it.

Ben.

[1] 
http://hackage.haskell.org/packages/archive/repa/3.2.1.1/doc/html/Data-Array-Repa.html


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: parallel garbage collection performance

2012-06-18 Thread Ben Lippmeier

On 19/06/2012, at 13:53 , Ben Lippmeier wrote:

 
 On 19/06/2012, at 10:59 , Manuel M T Chakravarty wrote:
 
 I wonder, do we have a Repa FAQ (or similar) that explains such issues? (And 
 is easily discoverable?)
 
 I've been trying to collect the main points in the haddocs for the main 
 module [1], but this one isn't there yet.
 
 I need to update the Repa tutorial on the Haskell wiki, and this should also 
 go into it.


I've also added the thread affinity advice to the Repa FAQ [1].

Ben.

[1] http://repa.ouroborus.net/




___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: What do the following numbers mean?

2012-04-02 Thread Ben Lippmeier

On 02/04/2012, at 10:10 PM, Jurriaan Hage wrote:
 Can anyone tell me what the exact difference is between
  1,842,979,344 bytes maximum residency (219 sample(s))
 and
   4451 MB total memory in use (0 MB lost due to fragmentation)
 
 I could not find this information in the docs anywhere, but I may have missed 
 it.

The maximum residency is the peak amount of live data in the heap. The total 
memory in use is the peak amount that the GHC runtime requested from the 
operating system. Because the runtime system ensures that the heap is always 
bigger than the size of the live data, the second number will be larger. In 
your figures, the live data peaks at about 1.8 GB, while the runtime had 
requested 4451 MB from the OS.

The maximum residency is determined by performing a garbage collection, which 
traces out the graph of live objects. This means that the number reported may 
not be the exact peak memory use of the program, because objects could be 
allocated and then become unreachable before the next sample. If you want a 
more accurate number then increase the frequency of the heap sampling with the 
-i<sec> RTS flag.
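
For example (MyProg is a stand-in name; -s prints the summary above, and -i sets the heap-sample interval as described, for a program built with profiling support):

  ./MyProg +RTS -s -i0.1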

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Build failure of syb-with-class with ghc-7.2.1

2011-08-09 Thread Ben Lippmeier

On 09/08/2011, at 23:15 , Sergei Trofimovich wrote:

 the HEAD of syb-with-class fails with the following error when built 
 with ghc-7.2.1 and template-haskell-2.6: 
 
 http://code.google.com/p/syb-with-class/issues/detail?id=4
 
 Is this a bug in TH?
 
 Very likely:
http://hackage.haskell.org/trac/ghc/ticket/5362

In TH code you now need to use mkName at variable uses instead of the names 
created directly with newName. Repa had a similar problem.
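
A minimal sketch of the workaround (a hypothetical splice, not code from the ticket): build the binder and its occurrences from the string via mkName, instead of reusing a Name returned by newName at the occurrence sites.

  {-# LANGUAGE TemplateHaskell #-}
  module MkId where

  import Language.Haskell.TH

  -- Generates:  myId x = x
  mkId :: Q [Dec]
  mkId =
    let x = mkName "x"   -- string-based name, resolved where it occurs
    in  sequence
          [ funD (mkName "myId")
                 [ clause [varP x] (normalB (varE x)) [] ] ]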

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: repa: fromVector

2011-05-19 Thread Ben Lippmeier

On 19/05/2011, at 8:27 PM, Christian Höner zu Siederdissen wrote:

 I'd like to use repa in a rather perverted mode, I guess:
 
 for my programs I need to be able to update arrays in place and
 repeatedly perform operations on them.
 Right now, it basically works like this (in ST):
 
 - create unboxed space using primitive (same as unboxed vectors)
 - unsafefreeze unboxed space
 - perform calculations on frozen, immutable space
 - write result into mutable space (which is shared with the unsafefrozen
  space)

If you care deeply about in-place update, then you could use the parallel array 
filling functions directly; the ones in D.A.Repa.Internals.Eval*.hs. For 2D 
images, use fillVectorBlockwiseP [1] or fillCursoredBlock2P [2].


fillVectorBlockwiseP
 :: Elt a
 => IOVector a          -- ^ vector to write elements into
 -> (Int -> a)          -- ^ fn to evaluate an element at the given index
 -> Int                 -- ^ width of image
 -> IO ()


-- | Fill a block in a 2D image, in parallel.
--   Coordinates given are of the filled edges of the block.
--   We divide the block into columns, and give one column to each thread.
fillCursoredBlock2P
 :: Elt a
 => IOVector a                    -- ^ vector to write elements into
 -> (DIM2 -> cursor)              -- ^ make a cursor to a particular element
 -> (DIM2 -> cursor -> cursor)    -- ^ shift the cursor by an offset
 -> (cursor -> a)                 -- ^ fn to evaluate an element at the given index
 -> Int                           -- ^ width of whole image
 -> Int                           -- ^ x0 lower left corner of block to fill
 -> Int                           -- ^ y0 (low x and y value)
 -> Int                           -- ^ x1 upper right corner of block to fill
 -> Int                           -- ^ y1 (high x and y value, index of last elem to fill)
 -> IO ()


Actually, it might be worthwhile exporting these in the API anyway.
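
For what it's worth, a minimal usage sketch of the first one (assuming fillVectorBlockwiseP were exported from the module in [1], and that IOVector is the unboxed mutable vector from the vector package; the gradient fill itself is just an invented example):

  import Data.Vector.Unboxed.Mutable             (IOVector)
  import Data.Array.Repa.Internals.EvalBlockwise (fillVectorBlockwiseP)

  -- Fill an image of the given width in parallel, computing each
  -- element from its linear index: a horizontal gradient in [0,1).
  fillGradient :: IOVector Double -> Int -> IO ()
  fillGradient vec width =
    fillVectorBlockwiseP vec
      (\ix -> fromIntegral (ix `mod` width) / fromIntegral width)
      width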

[1] 
http://code.ouroborus.net/repa/repa-head/repa/Data/Array/Repa/Internals/EvalBlockwise.hs
[2] 
http://code.ouroborus.net/repa/repa-head/repa/Data/Array/Repa/Internals/EvalCursored.hs



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: New codegen failing test-cases

2010-12-05 Thread Ben Lippmeier

On 06/12/2010, at 1:19 PM, David Terei wrote:

 I haven't looked at these branches for a fair few weeks. The problem when 
 they fail to build is usually that all the libraries are just set to follow 
 HEAD; they're not actually branched themselves, just the ghc compiler. So 
 there are probably some patches from ghc HEAD that need to be pulled in to 
 sync the compiler with the libs again. If you want to do some work on the 
 new codegen, the first step is to try to pull in all the patches from ghc 
 HEAD and synchronise the branch. It's not a fun job but GHC HQ wants to try 
 to merge all the new codegen stuff into HEAD asap. 
 
 libraries/dph/dph-par/../dph-common/Data/Array/Parallel.hs:1:14:
Unsupported extension: ParallelArrays
 make[1]: *** [libraries/dph/dph-par/dist-install/build/.depend-v.haskell] 
 Error 1
 make: *** [all] Error 2
 
 I can debug this further if you want me to.

We renamed the -XPArr language flag to -XParallelArrays. There was a patch to 
ghc-head that you'll have to pull or port across.
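
In source files that corresponds to a one-line pragma change (the -X flag rename implies the matching LANGUAGE pragma rename):

  {-# LANGUAGE ParallelArrays #-}   -- was: {-# LANGUAGE PArr #-}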

We're still actively working on DPH, and changes to the compiler often entail 
changes to the libraries.  If you haven't branched the libraries then your 
build is going to break on a weekly basis.

Ben.

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Deadlock with Repa

2010-07-29 Thread Ben Lippmeier

On 27/07/2010, at 11:24 PM, Jean-Marie Gaillourdet wrote:

 I've been trying to use repa and stumbled into rather strange behavior of my 
 program. Sometimes there seems to be a deadlock, at other times it seems 
 there is a livelock. Now, I've managed to prepare a version which 
 consistently generates the following output: 
 
 $ ./MDim
 MDim: thread blocked indefinitely in an MVar operation
 $
 
 But I don't use any MVar directly. And the only used libraries which are not 
 part of ghc are repa and repa-algorithms. To state it clearly: I don't use 
 any MVars, par, pseq, forkIO nor any other parallelism or concurrency 
 functionality. The only thing my program uses is repa, which is supposed to 
 use some kind of parallelism as far as the documentation says. So I am 
 wondering whether this is a problem with my code or with repa or with ghc.

This is a symptom of not having calls to force in the right place. Suppose 
you've created a thunk that sparks off a parallel computation. If some other 
parallel computation tries to evaluate it, then you've got nested parallelism. 
Operationally, it means that there was already a gang of threads doing 
something, but you tried to create a new one. 

The error message is poor, and we should really document it on the wiki. 
However, if you get this message then the program should still give the correct 
result. If it's really deadlocking then it's a bug.


 I'd be really happy if anyone could give me a hint how to debug this, or 
 whether I am able to do anything about it, at all. 

You'll want to add more calls to force to ensure that appropriate 
intermediate arrays are in manifest form. Using seq and deepSeqArray can also 
help.
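
To make that concrete, a minimal sketch with the Repa API of the time (assuming force and map from Data.Array.Repa, as in the current Hackage release):

  import Data.Array.Repa as R

  -- Force the intermediate array into manifest form before it is
  -- consumed by another parallel operation, so that the parallel
  -- computations don't nest.
  step :: Array DIM2 Double -> Array DIM2 Double
  step arr = R.force (R.map (* 2) arr)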

BTW: The Hackage repa package runs about 10x slower than it should against the 
current head, due to some changes in the inliner. I'm updating the package over 
the next few days, and I can also have a go at your example.

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Parallel Haskell: 2-year project to push real world use

2010-05-03 Thread Ben Lippmeier

You can certainly create an array with these values, but in the provided code 
it looks like each successive array element has a serial dependency on the 
previous two elements. How were you expecting it to parallelise?

Repa arrays don't support visible destructive update. For many algorithms you 
shouldn't need it, and it causes problems for parallelisation.

I'm actively writing more Repa examples now. Can you send me some links 
explaining the algorithm that you're using, and some example data + output?

Thanks,
Ben.



On 04/05/2010, at 9:21 AM, Christian Höner zu Siederdissen wrote:

   a = array (1,10) [ (i, f i) | i <- [1..10] ] where
     f 1 = 1
     f 2 = 1
     f i = a!(i-1) + a!(i-2)
 
 (aah, school ;)
 
 Right now, I am abusing vector in ST by doing this:
 
 a  <- new
 a' <- freeze a
 forM_ [3..10] $ \i -> do
   write a i (a'!(i-1) + a'!(i-2))
 
 Let's say I wanted to do something like this in dph (or repa), does that
 work? We are actually using this for RNA folding algorithms that are at
 least O(n^3) time. For some of the more advanced stuff, it would be
 really nice if we could just parallelize.
 
 To summarise: I need arrays that allow in-place updates.
 
 Otherwise, most libraries that do heavy stuff (O(n^3) or worse) are
 using vector right now. On a single core, it performs really great --
 even compared to C-code that has been optimized a lot.
 
 Thanks and Viele Gruesse,
 Christian

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Parallel Haskell: 2-year project to push real world use

2010-05-03 Thread Ben Lippmeier

On 03/05/2010, at 10:04 PM, Johan Tibell wrote:

 On Mon, May 3, 2010 at 11:12 AM, Simon Peyton-Jones simo...@microsoft.com 
 wrote:
 | Does this mean DPH is ready for abuse?
 |
 | The wiki page sounds pretty tentative, but it looks like it's been awhile
 | since it's been updated.
 |
 | http://www.haskell.org/haskellwiki/GHC/Data_Parallel_Haskell
 
 In truth, nested data parallelism has taken longer than we'd hoped to be 
 ready for abuse :-).   We have not lost enthusiasm though -- Manuel, Roman, 
 Gabi, Ben, and I talk on the phone each week about it.  I think we'll have 
 something usable by the end of the summer.
 
 That's very encouraging! I think people (me included) have gotten the 
 impression that the project ran into problems so challenging that it stalled. 
 Perhaps a small status update once in a while would give people a better idea 
 of what's going on. :)
 

I'm currently working full time on cleaning up Repa and adding more examples. 

I'll do a proper announcement on the mailing lists once I've got the wiki set 
up. It would have been today but community.haskell.org was flaking out 
yesterday.

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Status of GHC as a Cross Compiler

2009-09-24 Thread Ben Lippmeier


No, GHC won't be a native cross compiler in 6.12. There are #ifdefs through 
the code which control what target architecture GHC is being compiled for, 
and at the moment it doesn't support the host architecture being different 
from the target architecture.

I did some work on the native code generator this year which cleans up some 
of this, but it still needs several more weeks put into it to make it a real 
cross compiler.

Cheers,
Ben.


On 24/09/2009, at 5:24 AM, Donnie Jones wrote:


Hello John,

glasgow-haskell-users is a more appropriate list...
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

I went ahead and cc'd your message to the list. For any replies, please
include John's email address, as I don't think he is subscribed to the
list.

Hope that helps...
--
Donnie Jones

On Wed, Sep 23, 2009 at 1:50 PM, John Van Enk vane...@gmail.com wrote:

Hi,

This may be more appropriate for a different list, but I'm having a hard 
time figuring out whether or not we're getting a cross compiler in 6.12 or 
not. Can someone point me to the correct place in Trac to find this 
information?

/jve

___
Haskell-Cafe mailing list
haskell-c...@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe



___
Haskell-Cafe mailing list
haskell-c...@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] ThreadScope: Request for features for the performance tuning of parallel and concurrent Haskell programs

2009-03-11 Thread Ben Lippmeier


Hi Satnam,

On 12/03/2009, at 12:24 AM, Satnam Singh wrote:
 Before making the release I thought it would be an idea to ask people what 
 other features they would find useful for performance tuning. So if you 
 have any suggestions please do let us know!

Is it available in a branch somewhere to try out?

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Type (class) recursion + families = exponential compile time?

2009-02-26 Thread Ben Lippmeier


Here's the reference:
http://portal.acm.org/citation.cfm?id=96748

Deciding ML typability is complete for deterministic exponential time 
-- Harry G. Mairson.

Ben.


On 27/02/2009, at 10:12 AM, Ben Franksen wrote:


Hi

the attached module is a much reduced version of some type-level assurance 
stuff (inspired by the Lightweight Monadic Regions paper) I am trying to do. 
I am almost certain that it could be reduced further but it is late and I 
want to get this off my desk.

Note the 4 test functions, test11 .. test14. The following are timings for 
compiling the module with all test functions commented out except, 
respectively, test11, test12, test13, and test14:

b...@sarun[1]  time ghc -c Bug2.hs
ghc -c Bug2.hs  1,79s user 0,04s system 99% cpu 1,836 total

b...@sarun[1]  time ghc -c Bug2.hs
ghc -c Bug2.hs  5,87s user 0,14s system 99% cpu 6,028 total

b...@sarun[1]  time ghc -c Bug2.hs
ghc -c Bug2.hs  23,52s user 0,36s system 99% cpu 23,899 total

b...@sarun[1]  time ghc -c Bug2.hs
ghc -c Bug2.hs  102,20s user 1,32s system 97% cpu 1:45,89 total

It seems something is scaling very badly. You really don't want to wait for 
a version with 20 levels of nesting to compile...

If anyone has a good explanation for this, I'd be grateful.

BTW, I am not at all certain that this is ghc's fault, it may well be my 
program, i.e. the constraints are too complex, whatever. I have no idea how 
hard it is for the compiler to do all the unification. Also, the problem is 
not of much practical relevance, as no sensible program will use more than 
a handful of levels of nesting.

Cheers
Ben

[attachment: Bug2.hs]
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: SHA1.hs woes, was Version control systems

2008-08-19 Thread Ben Lippmeier


On 19/08/2008, at 8:57 PM, Ian Lynagh wrote:

 On Mon, Aug 18, 2008 at 09:20:54PM +1000, Ben Lippmeier wrote:

 Ian: Did this problem result in Intel CC / GCC register allocator
 freakouts?

 Have you got me confused with someone else? I don't think I've ever used
 Intel CC.

Sorry, I couldn't find the rest of the preceding message. Someone wrote that 
they had to turn down cc flags to get SHA1.hs to compile on IA64.

What C compiler was being used, and what were the symptoms?

SHA1.hs creates vastly more register pressure than any other code I know of 
(or could find), but only when -O or -O2 is enabled in GHC. If -O and -prof 
are enabled then the linear allocator runs out of stack slots (last time I 
checked).


I'm wondering three things:

1) If the C compiler could not compile the C code emitted by GHC then maybe 
we should file a bug report with the CC people.

2) If the register pressure in SHA1.hs is more due to excessive code 
unfolding than the actual SHA algorithm, then maybe this should be treated 
as a bug in the simplifier(?) (sorry, I'm not familiar with the core level 
stuff)

3) Ticket #1993 says that the linear allocator runs out of stack slots, and 
the graph coloring allocator stack overflows when trying to compile SHA1.hs 
with -funfolding-use-threshold20. I'm a bit worried about the stack overflow 
part.

The graph size is O(n^2) in the number of vreg conflicts, which isn't a 
problem for most code. However, if register pressure in SHA1.hs is 
proportional to the unfolding threshold (especially if more than linearly) 
then you could always blow up the graph allocator by setting the threshold 
arbitrarily high.

In this case maybe the allocator should give a warning when the pressure is 
high and suggest turning the threshold down. Then we could close this issue 
and prevent it from being re-opened.

Cheers,
Ben.

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Version control systems

2008-08-18 Thread Ben Lippmeier


On 18/08/2008, at 8:13 PM, Simon Marlow wrote:
 So would I usually, though I've had to turn down cc flags to get darcs
 to build on ia64 before (SHA1.hs generates enormous register pressure).

 We should really use a C implementation of SHA1, the Haskell version
 isn't buying us anything beyond being a stress test of the register
 allocator.

.. and perhaps a test case for too much code unfolding in GHC? Sounds like 
bugs to me. :)

If you turn down GHC flags the pressure also goes away.

Ian: Did this problem result in Intel CC / GCC register allocator freakouts?

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: ghc -O

2008-01-23 Thread Ben Lippmeier

Hi Matt,
No, I don't think anything extra is done in the native code generator 
when -O or -O2 are enabled. I believe that strictness information is 
used in the core stages only, eg to change let bindings (which allocate 
thunks) to case expressions (which don't) - but I haven't worked on the 
GHC core so I'm not that familiar with it.
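
The let-to-case idea looks roughly like this (a hand-written sketch of the 
transform, not actual GHC output):

  f :: Int -> Int
  f x = let y = x + 1      -- lazy: allocates a thunk for y
        in  y * 2
  -- If strictness analysis shows y is always demanded, GHC can compile
  -- the body as the equivalent of:
  --   f x = case x + 1 of y -> y * 2
  -- which evaluates x + 1 immediately and allocates no thunk.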


There's a plan to turn on the iterative register allocator that I wrote 
with -O2, but the STG to C-- translation needs to be improved before 
that will have much effect on performance. This is currently being 
completely overhauled by Norman Ramsey and co, so perhaps in 6.10.


Cheers,
Ben.


Matthew Naylor wrote:

Hello GHC gurus,

specifying -O or -O2 to GHC enables various optimisations in the
frontend of the compiler, but I wonder does it turn on any
optimisations in the backend/code-generator that are independent of the
frontend?

I can imagine that knowledge such as strictness which is computed by the
frontend would enable optimisations in the backend, but I'm more
interested in whether the backend would do anything independent (e.g.
peephole optimisation, C compiler options) with -O but not without.

Thanks,

Matt.
___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
  


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users