Re: [Haskell-cafe] Compiler stops at SpecConstr optimization

2013-08-29 Thread Ben Lippmeier

On 30/08/2013, at 2:38 AM, Daniel Díaz Casanueva wrote:

 While hacking on one of my projects, one of my modules stopped compiling for 
 apparently no reason. The compiler just freezes (as if it were in an 
 infinite loop) while trying to compile that particular module. Since I had 
 this problem I have been trying to reduce the problem as much as I could, and 
 I came up with this small piece of code:
 
  module Blah (foo) where
 
  import Data.Vector (Vector)
  import qualified Data.Vector as V
 
  foo :: (a -> a) -> Vector a -> Vector a
  foo f = V.fromList . V.foldl (\xs x -> f x : xs) []

Probably an instance of this one:

http://ghc.haskell.org/trac/ghc/ticket/5550

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [haskell.org Google Summer of Code 2013] Approved Projects

2013-05-28 Thread Ben Lippmeier

On 29/05/2013, at 1:11 AM, Edward Kmett wrote:

 This unfortunately means that we can't really show the unaccepted proposals 
 with information about how to avoid getting your proposal rejected.

You can if you rewrite the key points of a proposal to retain the overall 
message but remove identifying information. I think it would be helpful to 
write up some of the general reasons for projects being rejected.

I tried to do this for Haskell experience reports, on the Haskell Symposium 
experience report advice page.
 http://www.haskell.org/haskellwiki/HaskellSymposium/ExperienceReports


I'd imagine you could write up some common proposal / rejection / advice tuples 
like:

Proposal: I want to write an MMORPG in Haskell, because this would be a good 
demonstration of Haskell in a large real-world project. We can use this as a 
platform to develop the networking library infrastructure.

Rejection: This project is much too big, and the production of an MMORPG 
wouldn't benefit the community as a whole.

Advice: If you know of specific problems in the networking library 
infrastructure, then focus on those, using specific examples of where people 
have tried to do something and failed.


Ben.



Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-25 Thread Ben Lippmeier

On 25/04/2013, at 3:47 AM, Duncan Coutts wrote:

 It looks like fold and unfold fusion systems have dual limitations:
 fold-based fusion cannot handle zip style functions, while unfold-based
 fusion cannot handle unzip style functions. That is fold-based cannot
 consume multiple inputs, while unfold-based cannot produce multiple
 outputs.

Yes. This is a general property of data-flow programs and not just compilation 
via Data.Vector style co-inductive stream fusion, or a property of 
fold/unfold/hylo fusion. 


Consider these general definitions of streams and costreams.

-- A stream must always produce an element.
type Stream a   = IO a

-- A costream must always consume an element.
type CoStream a = a -> IO ()


And operators on them (writing S for Stream and C for CoStream).

-- Versions of map.
map     :: (a -> b) -> S a -> S b      (ok)
comap   :: (a -> b) -> C b -> C a      (ok)

-- Versions of unzip.
unzip   :: S (a, b) -> (S a, S b)      (bad)
counzip :: C a -> C b -> C (a, b)      (ok)
unzipc  :: S (a, b) -> C b -> S a      (ok)

-- Versions of zip.
zip     :: S a -> S b -> S (a, b)      (ok)
cozip   :: C (a, b) -> (C a, C b)      (bad)
zipc    :: C (a, b) -> S a -> C b      (ok)



The operators marked (ok) can be implemented without buffering data, while the 
combinators marked (bad) may need an arbitrary sized buffer.

Starting with 'unzip', suppose we pull elements from the first component of the 
result (the (S a)) but not the second component (the (S b)). To provide these 
'a' elements, 'unzip' must pull tuples from its source stream (S (a, b)) and 
buffer the 'b' part until someone pulls from the (S b).

Dually, with 'cozip', suppose we push elements into the first component of the 
result (the (C a)). The implementation must buffer them until someone pushes 
the corresponding element into the (C b), only then can it push the whole tuple 
into the source (C (a, b)) costream.


The two combinators unzipc and zipc are hybrids:

For 'unzipc', if we pull an element from the (S a), then the implementation can 
pull a whole (a, b) tuple from the source (S (a, b)) and then get rid of the 
'b' part by pushing it into the (C b). The fact that it can get rid of the 'b' 
part means it doesn't need a buffer.

Similarly, for 'zipc', if we push a 'b' into the (C b) then the implementation 
can pull the corresponding 'a' part from the (S a) and then push the whole (a, 
b) tuple into the C (a, b). The fact that it can get the corresponding 'a' 
means it doesn't need a buffer.

I've got some hand drawn diagrams of this if anyone wants them (mail me), but 
none of it helps implement 'unzip' for streams or 'cozip' for costreams. 
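Using the Stream/CoStream definitions above, the two buffer-free hybrids can be sketched directly. This is an illustrative sketch of the idea, not code from the thread:

```haskell
type Stream a   = IO a        -- pull one element
type CoStream a = a -> IO ()  -- push one element

-- Buffer-free hybrid: pulling an 'a' pulls a whole tuple from the
-- source, then immediately gets rid of the 'b' part by pushing it
-- into the costream. No buffer is needed.
unzipc :: Stream (a, b) -> CoStream b -> Stream a
unzipc pull push = do
  (a, b) <- pull
  push b
  return a

-- Dually: pushing a 'b' pulls the corresponding 'a' from the stream
-- and pushes the whole tuple into the costream. No buffer is needed.
zipc :: CoStream (a, b) -> Stream a -> CoStream b
zipc push pull = \b -> do
  a <- pull
  push (a, b)
```

In both cases the implementation can dispose of the extra element as soon as it arrives, which is exactly why these hybrids avoid the arbitrary-sized buffer that 'unzip' and 'cozip' require.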



 I'll be interested to see in more detail the approach that Ben is
 talking about. As Ben says, intuitively the problem is that when you've
 got multiple outputs, you need to make sure that someone is consuming
 them, and that that consumption is appropriately synchronised so that you
 don't have to buffer (buffering would almost certainly eliminate the
 gains from fusion). That might be possible if ultimately the multiple
 outputs are combined again in some way, so that overall you still have a
 single consumer that can be turned into a single lazy or eager loop.


At least for high performance applications, I think we've reached the limit of 
what short-cut fusion approaches can provide. By short cut fusion, I mean 
crafting a special source program so that the inliner + simplifier + 
constructor specialisation transform can crunch down the intermediate code into 
a nice loop. Geoff Mainland's recent paper extended stream fusion with support 
for SIMD operations, but I don't think stream fusion can ever be made to fuse 
programs with unzip/cozip-like operators properly. This is a serious problem 
for DPH, because the DPH vectoriser naturally produces code that contains these 
operators.

I'm currently working on Repa 4, which will include a GHC plugin that hijacks 
the intermediate GHC core code and performs the transformation described in 
Richard Waters's paper "Automatic transformation of series expressions into 
loops". The plugin will apply to stream programs, but not affect the existing 
fusion mechanism via delayed arrays. I'm using a cut-down 'clock calculus' from 
work on synchronous data-flow languages to guarantee that all outputs from an 
unzip operation are consumed in lock-step. Programs that don't do this won't be 
well typed. Forcing synchronicity guarantees that Waters's transform will apply 
to the program.

The Repa plugin will also do proper SIMD vectorisation for stream programs, 
producing the SIMD primops that Geoff recently added. Along the way it will 
brutally convert all operations on boxed/lifted numeric data to their unboxed 
equivalents, because I am sick of adding bang patterns to every single function 
parameter in Repa programs. 

Ben.




Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-25 Thread Ben Lippmeier

On 26/04/2013, at 2:15 PM, Johan Tibell wrote:

 Hi Ben,
 
 On Thu, Apr 25, 2013 at 7:46 PM, Ben Lippmeier b...@ouroborus.net wrote:
 The Repa plugin will also do proper SIMD vectorisation for stream programs, 
 producing the SIMD primops that Geoff recently added. Along the way it will 
 brutally convert all operations on boxed/lifted numeric data to their 
 unboxed equivalents, because I am sick of adding bang patterns to every 
 single function parameter in Repa programs.
 
 How far is this plugin from being usable to implement a
 
 {-# LANGUAGE Strict #-}
 
 pragma for treating a single module as if Haskell was strict?

There is already one that does this, but I haven't used it.

http://hackage.haskell.org/package/strict-ghc-plugin

It's one of the demo plugins, though you need to mark individual functions 
rather than the whole module (which would be straightforward to add).

The Repa plugin is only supposed to munge functions using the Repa library, 
rather than the whole module.

Ben.





Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-22 Thread Ben Lippmeier

On 22/04/2013, at 5:27 PM, Edward Z. Yang wrote:

 So, if I understand correctly, you're using the online/offline
 criterion to resolve non-directed cycles in pipelines?  (I couldn't
 tell how the Shivers paper was related.)

The online criterion guarantees that the stream operator does not need to 
buffer an unbounded amount of data (I think). 

I'm not sure what you mean by "resolve non-directed cycles".

The Shivers paper describes the same basic approach of splitting the code for a 
stream operator into the parts that run before the loop, for each element of 
the loop, and after the loop. Splitting multiple operators this way and then 
merging the parts into a single loop provides the concurrency required by the 
description in John Hughes's thesis.
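As an illustrative sketch of that splitting (my own, not from the thread), each operator's parts can be captured in a record, and two split operators merged so their per-element parts run interleaved in a single pass:

```haskell
-- A stream operator split into the code that runs before the loop,
-- for each element of the loop, and after the loop.
data Loop s a r = Loop
  { before  :: s            -- state set up before the loop
  , perElem :: s -> a -> s  -- run once for each element
  , after   :: s -> r       -- run after the loop to produce the result
  }

-- Run a split operator over a list in one pass.
runLoop :: Loop s a r -> [a] -> r
runLoop (Loop s f g) xs = g (foldl f s xs)

-- Merge the parts of two operators into a single loop. Their
-- per-element parts now run interleaved, which is the concurrency
-- needed to fuse one producer into two consumers.
merge :: Loop s a r -> Loop t a q -> Loop (s, t) a (r, q)
merge (Loop s f g) (Loop t f' g') =
  Loop (s, t) (\(x, y) a -> (f x a, f' y a)) (\(x, y) -> (g x, g' y))
```

For example, merging a sum operator with a length operator computes both results in one traversal of the input.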
 
Ben.





Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-21 Thread Ben Lippmeier

On 22/04/2013, at 11:07 , Edward Z. Yang ezy...@mit.edu wrote:

 Hello all, (cc'd stream fusion paper authors)
 
 I noticed that the current implementation of stream fusion does
 not support multiple-return stream combinators, e.g.
 break :: (a -> Bool) -> [a] -> ([a], [a]).  I thought a little
 bit about how one might go about implementing this, but the problem
 seems nontrivial. (One possibility is to extend the definition
 of Step to support multiple return, but the details are a mess!)
 Nor, as far as I can tell, does the paper give any treatment of
 the subject.  Has anyone thought about this subject in some detail?


I've spent the last few months fighting this exact problem.

The example you state is one instance of a more general limitation. Stream 
fusion (and most other short-cut fusion approaches) cannot fuse a producer into 
multiple consumers. The fusion systems don't support any unzip-like function, 
where elements from the input stream end up in multiple output streams. For 
example:

unzip :: [(a, b)] -> ([a], [b])

dup   :: [a] -> ([a], [a])

The general problem is that if elements of one output stream are demanded 
before the other, then the stream combinator must buffer elements until they 
are demanded by both outputs.
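A direct stream version of unzip makes the buffering concrete. In the sketch below (my own illustration; the name unzipS and the IO-pull representation are assumptions, not from the thread), whichever output runs ahead forces the buffer for the other output to grow without bound:

```haskell
-- Split a stream of pairs into two pull streams. Each side keeps a
-- buffer of elements pulled from the source on behalf of the other
-- side but not yet demanded.
unzipS :: IO (a, b) -> IO (IO a, IO b)
unzipS pull = do
  bufA <- newIORef []
  bufB <- newIORef []
  let pullA = do
        as <- readIORef bufA
        case as of
          a : rest -> writeIORef bufA rest >> return a
          []       -> do (a, b) <- pull
                         modifyIORef bufB (++ [b])  -- may grow unboundedly
                         return a
      pullB = do
        bs <- readIORef bufB
        case bs of
          b : rest -> writeIORef bufB rest >> return b
          []       -> do (a, b) <- pull
                         modifyIORef bufA (++ [a])  -- may grow unboundedly
                         return b
  return (pullA, pullB)
```

If a consumer pulls only from the first output, every pull enqueues another 'b' that nothing ever dequeues, which is exactly the unbounded buffer described above.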

John Hughes described this problem in his thesis, and gave an informal proof 
that it cannot be solved without some form of concurrency -- meaning the 
evaluation of the two consumers must be interleaved.

I've got a solution for this problem and it will form the basis of Repa 4, 
which I'm hoping to finish a paper about for the upcoming Haskell Symposium.

Ben.












Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-21 Thread Ben Lippmeier

On 22/04/2013, at 12:23 , Edward Z. Yang ezy...@mit.edu wrote:

 I've got a solution for this problem and it will form the basis of
 Repa 4, which I'm hoping to finish a paper about for  the upcoming
 Haskell Symposium.
 
 Sounds great! You should forward me a preprint when you have something
 in presentable shape. I suppose before then, I should look at 
 repa-head/repa-stream
 to figure out what the details are?

The basic approach is already described in:

Automatic Transformation of Series Expressions into Loops
Richard Waters, TOPLAS 1991

The Anatomy of a Loop
Olin Shivers, ICFP 2005


The planned contributions of the HS paper are:
 1) How to extend the approach to the combinators we need for DPH.
 2) How to package it nicely into a Haskell library.

I'm still working on the above...

Ben.




Re: [Haskell-cafe] ANN: Nomyx 0.1 beta, the game where you can change the rules

2013-02-26 Thread Ben Lippmeier

On 27/02/2013, at 10:28 , Corentin Dupont corentin.dup...@gmail.com wrote:

 Hello everybody!
 I am very happy to announce the beta release [1] of Nomyx, the only game 
 where You can change the rules.

Don't forget 1KBWC: http://www.corngolem.com/1kbwc/

Ben.





Re: [Haskell-cafe] The end of an era, and the dawn of a new one

2012-12-06 Thread Ben Lippmeier

On 06/12/2012, at 3:56 , Simon Peyton-Jones wrote:

 Particularly valuable are offers to take responsibility for a
 particular area (eg the LLVM code generator, or the FFI).  I'm
 hoping that this sea change will prove to be quite empowering,
 with GHC becoming more and more a community project, more
 resilient with fewer single points of failure. 

The LLVM project has recently come to the same point. The codebase has become 
too large for Chris Lattner to keep track of it all, so they've moved to a 
formal Code Ownership model. People own particular directories of the code 
base, and the code owners are expected to review patches for those directories.

The GHC project doesn't have a formal patch review process, I think because the 
people with commit access on d.h.o generally know who owns what. Up until last 
week I think it was "SPJ owns the type checker and simplifier, and SM owns 
everything else". :-)

At this stage, I think it would help if we followed the LLVM approach of having 
a formal CODE_OWNERS file in the root path of the repo explicitly listing the 
code owners. That way GHC HQ knows what's covered and what still needs a 
maintainer. The LLVM version is here [1].

Code owners would:
1) Be the go-to person when other developers have questions about that code.
2) Fix bugs in it that no-one else has claimed.
3) Generally keep the code tidy, documented and well-maintained.

Simon: do you want a CODE_OWNERS file? If so then I can start it. I think it's 
better to have it directly in the repo than on the wiki; that way no-one who 
works on the code can miss it.

I suppose I'm the default owner of the register allocators and non-LLVM native 
code generators.

Ben.

[1] http://llvm.org/viewvc/llvm-project/llvm/trunk/CODE_OWNERS.TXT?view=markup






Re: [Haskell-cafe] Are there REPA linear algebra routines? e.g. Eigenvalues?

2012-12-06 Thread Ben Lippmeier

On 06/12/2012, at 3:18 , KC wrote:

 :)

Not apart from the matrix-matrix multiply code in repa-algorithms. If you 
wanted to write some, I'd be happy to fold them into repa-algorithms.

Ben.





Re: [Haskell-cafe] Reaching Max Bolingbroke

2012-11-18 Thread Ben Lippmeier

On 19/11/2012, at 24:40 , Roman Cheplyaka wrote:

 For the last two months I've been trying to reach Max Bolingbroke via
 his hotmail address, github and linkedin, but did not succeed.
 
 Does anyone know if he's well? If someone could help by telling him that
 I'd like to get in touch, I'd appreciate that.

He wasn't at ICFP either. I think SPJ said he was in the middle of writing up 
his PhD thesis.

When I was doing mine I was out of circulation for a good 3 months.

Ben.




Re: [Haskell-cafe] Is Repa suitable for boxed arrays?...

2012-06-03 Thread Ben Lippmeier

On 03/06/2012, at 18:10 , Stuart Hungerford wrote:

 I need to construct a 2D array of a Haskell data type (boxed?)
 where each array entry value depends on values earlier in the same
 array (i.e. for entries in previous row/column indexes).

It should work. Use the V type-index for boxed arrays [1], so your array type 
will be something like (Array V DIM2 Float).

If you can't figure it out then send me a small list program showing what you 
want to do.


 Repa (V3.1.4.2) looks very powerful and flexible but it's still not
 clear to me that it will work with arbitrary values as I haven't been
 able to get any of the Wiki tutorial array creation examples to work
 (this is with Haskell platform 2012.2 pre-release for OS/X).

The wiki tutorial is old. It was written for the Repa 2 series, but Repa 3 is 
different. However, I just (just) submitted a paper on Repa 3 to the Haskell 
Symposium, which might help [2].

[1] 
http://hackage.haskell.org/packages/archive/repa/3.1.4.2/doc/html/Data-Array-Repa-Repr-Vector.html
[2] http://www.cse.unsw.edu.au/~benl/papers/guiding/guiding-Haskell2012-sub.pdf


Ben.






Re: [Haskell-cafe] GHCi runtime linker: fatal error (was Installing REPA)

2012-04-09 Thread Ben Lippmeier

On 08/04/2012, at 2:41 AM, Dominic Steinitz wrote:
 Hi Ben, Chris and Others,
 
 Thanks for your replies and suggestions. All I want to do is invert (well, 
 solve actually) a tridiagonal matrix, so upgrading ghc from the version that 
 comes with the platform seems a bit overkill. I think I will go with Chris' 
 suggestion for now and maybe upgrade ghc (and REPA) when I am feeling braver.
 
 Dominic.
 Sadly I now get this when trying to multiply two matrices. Is this because I 
 have two copies of Primitive? I thought Cabal was supposed to protect me from 
 this sort of occurrence. Does anyone have any suggestions on how to solve 
 this?

You'll need to upgrade. Trying to support old versions of software is a lost 
cause.

I pushed Repa 3.1 to Hackage on the weekend. It has a *much* cleaner API. I 
can't recommend continuing to use Repa 2. You will just run into all the 
problems that are now fixed in Repa 3. 

Ben.




Re: [Haskell-cafe] Installing REPA

2012-04-07 Thread Ben Lippmeier

On 07/04/2012, at 9:33 AM, Chris Wong wrote:

 On Sat, Apr 7, 2012 at 2:02 AM, Dominic Steinitz
 idontgetoutm...@googlemail.com wrote:
 Hi,
 
 I'm trying to install REPA but getting the following. Do I just install
 base? Or is it more complicated than that?
 
 Thanks, Dominic.
 
 I think the easiest solution is to just use an older version of Repa.
 According to Hackage, the latest one that works with base 4.3 is Repa
 2.1.1.3:
 
 $ cabal install repa==2.1.1.3

I've just pushed Repa 3 onto Hackage, which has a much better API than the 
older versions, and solves several code fusion problems. However, you'll need 
to upgrade to GHC 7.4 to use it. GHC 7.0.3 is two major releases behind the 
current version.

Ben.





Re: [Haskell-cafe] Installing REPA

2012-04-07 Thread Ben Lippmeier

On 07/04/2012, at 21:38 , Peter Simons wrote:

 Hi Ben,
 
 I've just pushed Repa 3 onto Hackage, which has a much better API
 than the older versions, and solves several code fusion problems.
 
 when using the latest version of REPA with GHC 7.4.1, I have trouble
 building the repa-examples package:
 
 | Building repa-examples-3.0.0.1...
 | Preprocessing executable 'repa-volume' for repa-examples-3.0.0.1...

 When I attempt to use repa 3.1.x, the build won't even get past the
 configure stage, because Cabal refuses these dependencies. Is that a
 known problem, or am I doing something wrong?

It is a conjunction of tedious Cabal and Hackage limitations, as well as my 
failure to actually upload the new repa-examples package.

Please try again now, and if that doesn't work email me the output of:

$ cabal update
$ cabal install repa-examples
$ ghc-pkg list

Thanks,
Ben.




Re: [Haskell-cafe] Error in installing dph-examples on Mac OS X 10.7.3

2012-02-09 Thread Ben Lippmeier

On 10/02/2012, at 6:12 AM, mukesh tiwari wrote:

 Hello all, 
 I am trying to install dph-examples on Mac OS X version 10.7.3 but am getting 
 this error. I am using ghc-7.4.1.


This probably isn't DPH specific. Can you compile a hello world program with 
-fllvm?

Ben.


Re: [Haskell-cafe] Loading a texture in OpenGL

2012-02-06 Thread Ben Lippmeier

On 07/02/2012, at 7:00 AM, Clark Gaebel wrote:

 Using the OpenGL package on Hackage, how do I load a texture from an array?
 
 In the red book[1], I see their code using glGenTextures and glBindTexture, 
 but I can't find these in the documentation. Are there different functions I 
 should be calling?

The Gloss graphics library has texture support, and the code for drawing them 
is confined to this module:

http://code.ouroborus.net/gloss/gloss-head/gloss/Graphics/Gloss/Internals/Render/Picture.hs

Feel free to steal the code from there.

Ben.




Re: [Haskell-cafe] Loading a texture in OpenGL

2012-02-06 Thread Ben Lippmeier

On 07/02/2012, at 2:40 PM, Clark Gaebel wrote:

 Awesome. Thanks!
 
 As a follow up question, how do I add a finalizer to a normal variable? 
 OpenGL returns an integer handle to your texture in graphics memory, and you 
 have to call deleteObjectNames on it. Is there any way to have this 
 automatically run once we lose all references to this variable (and all 
 copies)?

I don't know. I've only used ForeignPtrs with finalisers before [1].

One problem with these finalisers is that GHC provides no guarantees on when 
they will be run. It might be just before the program exits, instead of when 
the pointer actually becomes unreachable. Because texture memory is a scarce 
resource, I wouldn't want to rely on a finaliser to free it -- though I suppose 
this depends on what you're doing.

Ben.

[1] 
http://www.haskell.org/ghc/docs/latest/html/libraries/haskell2010-1.1.0.1/Foreign-ForeignPtr.html




Re: [Haskell-cafe] Loading a texture in OpenGL

2012-02-06 Thread Ben Lippmeier

On 07/02/2012, at 2:50 PM, Clark Gaebel wrote:

 I would be running the GC manually at key points to make sure it gets cleaned 
 up. Mainly, before any scene changes when basically everything gets thrown 
 out anyways.


From the docs:

newForeignPtr :: FinalizerPtr a -> Ptr a -> IO (ForeignPtr a)

Turns a plain memory reference into a foreign pointer, and associates a 
finalizer with the reference. The finalizer will be executed after the last 
reference to the foreign object is dropped. There is no guarantee of 
promptness, however the finalizer will be executed before the program exits.


No guarantee of promptness. Even if the GC knows your pointer is unreachable, 
it might choose not to call the finaliser. I think people have been bitten by 
this before.
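As a sketch of the pattern under discussion (my own illustration, not from the thread): Foreign.Concurrent lets you attach an arbitrary IO action as a finaliser, and finalizeForeignPtr forces that action to run promptly rather than waiting for the GC. The name newManagedResource is hypothetical, and the cleanup action stands in for something like OpenGL's deleteObjectNames:

```haskell
import qualified Foreign.Concurrent as FC
import Foreign.ForeignPtr (ForeignPtr, finalizeForeignPtr)
import Foreign.Ptr (nullPtr)

-- Wrap a cleanup action so it runs when the wrapper becomes
-- unreachable, or immediately when forced with finalizeForeignPtr.
-- We only use the ForeignPtr as a finaliser carrier, so the pointer
-- itself can be null.
newManagedResource :: IO () -> IO (ForeignPtr ())
newManagedResource cleanup = FC.newForeignPtr nullPtr cleanup
```

Calling finalizeForeignPtr at a known point (say, a scene change) sidesteps the promptness problem, at the cost of managing the lifetime explicitly.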

Ben.




Re: [Haskell-cafe] Compiling dph package with ghc-7.4.0.20111219

2012-01-22 Thread Ben Lippmeier

On 21/01/2012, at 22:47 , mukesh tiwari wrote:

 Hello all, 
 I have installed ghc-7.4.0.20111219 and this announcement says that "The 
 release candidate accidentally includes the random, primitive, vector and dph 
 libraries. The final release will not include them." I tried to compile a 
 program:
 
 [ntro@localhost src]$ ghc-7.4.0.20111219 -c -Odph -fdph-par ParallelMat.hs 
 ghc: unrecognised flags: -fdph-par
 Usage: For basic information, try the `--help' option.
 [ntro@localhost src]$ ghc-7.2.1 -c -Odph -fdph-par ParallelMat.hs  

The -fdph-par flag doesn't exist anymore, but we haven't had a chance to update 
the wiki yet. Use -package dph-lifted-vseg to select the backend. You could 
also look at the cabal file for the dph-examples package to see what flags we 
use when compiling.

Ben.




Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?

2011-12-20 Thread Ben Lippmeier

On 20/12/2011, at 6:06 PM, Roman Cheplyaka wrote:

 * Alexander Solla alex.so...@gmail.com [2011-12-19 19:10:32-0800]
 * Documentation that discourages thinking about bottom as a 'value'.  It's
 not a value, and that is what defines it.
 
 In denotational semantics, every well-formed term in the language must
 have a value. So, what is a value of fix id?

There isn't one!

Bottoms will be the null pointers of the 2010's, you watch.
 
Ben.




Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?

2011-12-20 Thread Ben Lippmeier

On 20/12/2011, at 9:06 PM, Thiago Negri wrote:
 There isn't one!
 
 Bottoms will be the null pointers of the 2010's, you watch.


 How would you represent it then?

Types, probably. In C, the badness of null pointers is that when you inspect an 
int* you don't always find an int. Of course the superior Haskell solution is 
to use algebraic data types, and represent a possibly exceptional integer by 
Maybe Int. But then when you inspect a Maybe Int you don't always get an .. 
ah.


 Would it cause a compiler error?


Depends whether you really wanted an Int or not.

Ben.




Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?

2011-12-20 Thread Ben Lippmeier

  In denotational semantics, every well-formed term in the language must
  have a value. So, what is a value of fix id?
 
 There isn't one!
 
 Bottoms will be the null pointers of the 2010's, you watch.
 
 This ×1000. Errors go in an error monad.
 
 Including all possible manifestations of infinite loops?

Some would say that non-termination is a computational effect, and I can argue 
either way depending on the day of the week.

Of course, the history books show that monads were invented *after* it was 
decided that Haskell would be a lazy language. Talk about selection bias.

Ben.




Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?

2011-12-20 Thread Ben Lippmeier

On 20/12/2011, at 21:52 , Gregory Crosswhite wrote:
 
 Some would say that non-termination is a computational effect, and I can 
 argue either way depending on the day of the week.
 
 *shrug*  I figure that whether you call _|_ a value is like whether you 
 accept the Axiom of Choice:  it is a situational decision that depends on 
 what you are trying to learn more about.

I agree, but I'd like to have more control over my situation. Right now we have 
boxed and lifted Int, and unboxed and unlifted Int#, but not the boxed and 
unlifted version, which IMO is usually what you want.


 Of course, the history books show that monads were invented *after* it was 
 decided that Haskell would be a lazy language. Talk about selection bias.
 
 True, but I am not quite sure how that is relevant to _|_...

I meant to address the implicit question "why doesn't Haskell use monads to 
describe non-termination already?". The answer isn't necessarily that it's not 
a good idea; it's that this wasn't an option at the time.


 Dec 20, 2011, в 14:40, Jesse Schalken jesseschal...@gmail.com написал(а):


 Including all possible manifestations of infinite loops?
 
 So... this imaginary language of yours would be able to solve the halting 
 problem?

All type systems are incomplete. The idea is to do a termination analysis, and 
if the program cannot be proved to terminate, then it is marked as possibly 
non-terminating. This isn't the same as deciding something is *definitely* 
non-terminating, which is what the halting problem is about. This "possibly 
non-terminating" approach is already used by Coq, Agda and other languages.

Ben.



Re: [Haskell-cafe] (Repa) hFlush: illegal operation

2011-08-24 Thread Ben Lippmeier

On 25/08/2011, at 7:15 , Michael Orlitzky wrote:

 I'm using Repa to process a ton of MRI data. The basic process is,
 
  * Read in the data
  * Create a big 'ol data structure (grid) from it
  * Compute the output in parallel using 'traverse'
  * Write the output to file
 
 However, during the last step, I'm getting,
 
  $ ./bin/spline3 +RTS -N4
  spline3: output.txt: hFlush: illegal operation (handle is closed)


 read_values_1d :: FilePath - IO Values1D
 read_values_1d path = readVectorFromTextFile path

The implementation of the text IO functions is fairly naive, just using Haskell 
Strings etc under the covers. It may have problems with massive files. 

Can you send me some gzipped data of the same size/form as what you're using, 
or tell me where to download it? Even if your real source data is several GB in 
size, if you make a test file with mostly zeros it should gzip down to nothing.

Also, what is the native form of the data you are using? If it's in some 
standard binary form it may just be easier to write a native Repa loader for it.

Cheers,
Ben.




Re: [Haskell-cafe] logic and types

2011-07-31 Thread Ben Lippmeier

On 01/09/2011, at 8:48 , Patrick Browne wrote:

 Hi,
 Below are some questions about the logical interpretation of types and
 type classes.
 
 Thanks,
 Pat
 
 module J where
 type Id = String
 type Position = Integer
 data Person = Person Id Position deriving Show
 
 -- Is this an axiom at type level?
 class Pos a where
   getPos :: a -> Position

One way to think of a type class is that it defines a set of types. For 
example, Eq is the set of types that support equality, and Pos is the set of 
types that have a position. By giving the class definition you've defined what 
it means to be a member of that set, namely that members must support the 
'getPos' method, but without instances that set is empty. Whether you treat 
this bare class definition as an axiom depends on what you want from your 
logical system. 


 -- The :type command says
 -- forall a. (Pos a) => a -> Position
 -- How do I write this in logic? (e.g. implies, and, or, etc)

Type systems are logical systems; there is no difference. Granted, some systems 
correspond to parts of others, but there is no single logical system that can 
be considered to be *the* logic. An equivalent question would be: how do I 
write this in functional programming?


 -- What exactly is being asserted about the type variable and/or about
 the class?

If you ignore the possibility that the function could diverge, then it says: 
"For all types 'a', given that 'a' is a member of the set Pos, and given a 
value of type 'a', we can construct a Position."

Note that this doesn't guarantee that there are any types 'a' that are members 
of Pos. In Haskell you can define a type class, but not give instances for it, 
and still write functions using the type class methods.


 -- I am not sure of the respective roles of => and -> in a logical context

Once again, which logic?. The type system that checks GHC core is itself a 
logical system. GHC core has recently been rejigged so that type class 
constraints are just the types of dictionaries. In this case we have:

 forall (a: *). Pos a -> a -> Position

In DDC core, there are other sorts of constraints besides type class 
constraints. In early stages of the compiler we encode type class constraints 
as dependent kinds, so have this:

 forall (a: *). forall (_: Pos a). a -> Position.

Both are good, depending on how you're transforming the core program.


 -- Is the following a fact at type level, class level or both?
 instance Pos Person where
  getPos (Person i p) = p

If you take the GHC approach, a type class declaration and instance is 
equivalent to this:

data Pos a 
 = PosDict { getPos :: Pos a -> a -> Position }

dictPosPerson :: Pos Person
dictPosPerson
 = PosDict (\d (Person i p) -> p)

From this we've got two facts:
 Pos :: * -> *
 dictPosPerson :: Pos Person

You could interpret this as:
 1) There is a set of types named Pos
 2) There is an element of this set named Person.
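The dictionary translation above can be made runnable as a self-contained, base-only sketch. The names 'PosDict', 'getPosD', and 'describe' are illustrative, not the ones GHC generates, and for brevity the dictionary field here drops the explicit self-dictionary argument:

```haskell
type Id = String
type Position = Integer

data Person = Person Id Position

-- class Pos a where getPos :: a -> Position   becomes a record of methods:
newtype PosDict a = PosDict { getPosD :: a -> Position }

-- instance Pos Person ...                     becomes a dictionary value:
dictPosPerson :: PosDict Person
dictPosPerson = PosDict (\(Person _ p) -> p)

-- An overloaded caller   describe :: Pos a => a -> Position
-- becomes a function that receives the dictionary explicitly:
describe :: PosDict a -> a -> Position
describe dict = getPosD dict

main :: IO ()
main = print (describe dictPosPerson (Person "pat" 2))  -- prints 2
```

Calling 'describe' with a different dictionary value would resolve 'getPosD' differently, which is the whole trick behind class overloading.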


 -- Is it the evaluation or the type checking that provides a proof of
 type correctness?
 -- getPos(Person 1 2)

The type inferencer constructs a proof that a Haskell source program is well 
typed. It does this by converting it to GHC core, which is a formal logical 
system. The core program itself is a proof that there is a program which has 
its type. The type checker for GHC core then checks that this proof is valid.

Ben.





[Haskell-cafe] Haskell Implementors Workshop talk proposals due this Friday!

2011-07-20 Thread Ben Lippmeier
Call for Talks
ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2011
Tokyo, Japan, September 23rd, 2011
  The workshop will be held in conjunction with ICFP 2011
 http://www.icfpconference.org/icfp2011/

Important dates

Proposal Deadline:  22nd July  2011
Notification:8th August2011
Workshop:   23rd September 2011

The Haskell Implementors' Workshop is to be held alongside ICFP 2011
this year in Tokyo, Japan. There will be no proceedings; it is an
informal gathering of people involved in the design and development
of Haskell implementations, tools, libraries, and supporting
infrastructure.

This relatively new workshop reflects the growth of the user
community: there is a clear need for a well-supported tool chain for
the development, distribution, deployment, and configuration of
Haskell software.  The aim is for this workshop to give the people
involved with building the infrastructure behind this ecosystem an
opportunity to bat around ideas, share experiences, and ask for
feedback from fellow experts.

We intend the workshop to have an informal and interactive feel, with
a flexible timetable and plenty of room for ad-hoc discussion, demos,
and impromptu short talks.


Scope and target audience
-

It is important to distinguish the Haskell Implementors' Workshop from
the Haskell Symposium which is also co-located with ICFP 2011. The
Haskell Symposium is for the publication of Haskell-related research. 
In contrast, the Haskell Implementors' Workshop will have no
proceedings -- although we will aim to make talk videos, slides and 
presented data available with the consent of the speakers.

In the Haskell Implementors' Workshop we hope to study the underlying
technology. We want to bring together anyone interested in the nitty
gritty details necessary to turn a text file into a deployed product.
Having said that, members of the wider Haskell community are more than
welcome to attend the workshop -- we need your feedback to keep the
Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics
that people feel we've missed, so by all means submit a proposal even
if it doesn't fit exactly into one of these buckets:

  * Compilation techniques
  * Language features and extensions
  * Type system implementation
  * Concurrency and parallelism: language design and implementation
  * Performance, optimisation and benchmarking
  * Virtual machines and run-time systems
  * Libraries and Tools for development or deployment


Talks
-

At this stage we would like to invite proposals from potential speakers
for a relatively short talk. We are aiming for 20 min talks with 10 mins
for questions and changeovers. We want to hear from people writing
compilers, tools, or libraries, people with cool ideas for directions in
which we should take the platform, proposals for new features to be
implemented, and half-baked crazy ideas. Please submit a talk title and
abstract of no more than 200 words to b...@cse.unsw.edu.au

We will also have a lightning talks session which will be organised on
the day. These talks will be 2-10 minutes, depending on available time.
Suggested topics for lightning talks are to present a single idea, a
work-in-progress project, a problem to intrigue and perplex Haskell
implementors, or simply to ask for feedback and collaborators.


Organisers
--

  * Rebekah Leslie   (Portland State University)
  * Ben Lippmeier - co-chair (University of New South Wales)
  * Andres Loeh  (Well-Typed LLP)
  * Oleg Lobachev(University of Marburg)
  * Neil Mitchell - co-chair (Standard Chartered)
  * Dimitrios Vytiniotis (Microsoft Research)





[Haskell-cafe] Haskell Implementors Workshop 2011, Second CFT

2011-07-04 Thread Ben Lippmeier
Call for Talks
ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2011
Tokyo, Japan, September 23rd, 2011
  The workshop will be held in conjunction with ICFP 2011
 http://www.icfpconference.org/icfp2011/

Important dates

Proposal Deadline:  22nd July  2011
Notification:8th August2011
Workshop:   23rd September 2011

The Haskell Implementors' Workshop is to be held alongside ICFP 2011
this year in Tokyo, Japan. There will be no proceedings; it is an
informal gathering of people involved in the design and development
of Haskell implementations, tools, libraries, and supporting
infrastructure.

This relatively new workshop reflects the growth of the user
community: there is a clear need for a well-supported tool chain for
the development, distribution, deployment, and configuration of
Haskell software.  The aim is for this workshop to give the people
involved with building the infrastructure behind this ecosystem an
opportunity to bat around ideas, share experiences, and ask for
feedback from fellow experts.

We intend the workshop to have an informal and interactive feel, with
a flexible timetable and plenty of room for ad-hoc discussion, demos,
and impromptu short talks.


Scope and target audience
-

It is important to distinguish the Haskell Implementors' Workshop from
the Haskell Symposium which is also co-located with ICFP 2011. The
Haskell Symposium is for the publication of Haskell-related research. 
In contrast, the Haskell Implementors' Workshop will have no
proceedings -- although we will aim to make talk videos, slides and 
presented data available with the consent of the speakers.

In the Haskell Implementors' Workshop we hope to study the underlying
technology. We want to bring together anyone interested in the nitty
gritty details necessary to turn a text file into a deployed product.
Having said that, members of the wider Haskell community are more than
welcome to attend the workshop -- we need your feedback to keep the
Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics
that people feel we've missed, so by all means submit a proposal even
if it doesn't fit exactly into one of these buckets:

  * Compilation techniques
  * Language features and extensions
  * Type system implementation
  * Concurrency and parallelism: language design and implementation
  * Performance, optimisation and benchmarking
  * Virtual machines and run-time systems
  * Libraries and Tools for development or deployment


Talks
-

At this stage we would like to invite proposals from potential speakers
for a relatively short talk. We are aiming for 20 min talks with 10 mins
for questions and changeovers. We want to hear from people writing
compilers, tools, or libraries, people with cool ideas for directions in
which we should take the platform, proposals for new features to be
implemented, and half-baked crazy ideas. Please submit a talk title and
abstract of no more than 200 words to b...@cse.unsw.edu.au

We will also have a lightning talks session which will be organised on
the day. These talks will be 2-10 minutes, depending on available time.
Suggested topics for lightning talks are to present a single idea, a
work-in-progress project, a problem to intrigue and perplex Haskell
implementors, or simply to ask for feedback and collaborators.


Organisers
--

  * Rebekah Leslie   (Portland State University)
  * Ben Lippmeier - co-chair (University of New South Wales)
  * Andres Loeh  (Well-Typed LLP)
  * Oleg Lobachev(University of Marburg)
  * Neil Mitchell - co-chair (Standard Chartered)
  * Dimitrios Vytiniotis (Microsoft Research)





Re: [Haskell-cafe] Can it be proven there are no intermediate useful type classes between Applicative Functors Monads?

2011-06-05 Thread Ben Lippmeier

On 06/06/2011, at 5:51 , KC wrote:

 If new intermediate classes crop up then there would be no point in fixing
 
  class (Applicative m) => Monad m where
 
 since it would have to be changed if new intermediate classes are found.
 
 I realize non-existence proofs are hard.

Not as hard as formalising useful.

Ben.




Re: [Haskell-cafe] Is fusion overrated?

2011-05-18 Thread Ben Lippmeier

On 18/05/2011, at 15:55 , Roman Cheplyaka wrote:
 Of course I don't claim that fusion is useless -- just trying to
 understand the problem it solves. Are we saving a few closures and cons
 cells here?

And thunk allocations, and thunk entries. Entering a thunk costs upwards of 20 
cycles, while performing a single addition should only cost one. Imagine every 
thunk entry is a function call. You don't want to call a whole function just to 
add two numbers together.

Those few closures and cons cells can be surprisingly expensive when compared 
to native ALU instructions on a modern machine.
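As a hedged, base-only illustration of what fusion buys (not taken from the thread): the unfused version allocates and enters an intermediate list of cons cells and thunks, while the manually "fused" version is the single strict loop that a fusion system aims to produce.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Unfused: 'map' builds an intermediate list; each cell is an allocation
-- and each element a thunk that must be entered.
unfused :: Int -> Int
unfused n = sum (map (* 2) [1 .. n])

-- Fused by hand: one strict accumulator loop, no intermediate list, so
-- each step is close to a bare ALU instruction.
fused :: Int -> Int
fused n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + 2 * i) (i + 1)

main :: IO ()
main = print (unfused 1000 == fused 1000)  -- prints True
```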

Ben.







Re: [Haskell-cafe] repa Shape help

2011-05-09 Thread Ben Lippmeier

On 09/05/2011, at 15:31 , bri...@aracnet.com bri...@aracnet.com wrote:

 main = do
  let x = A.fromList (AS.shapeOfList [2, 2]) ([1.0, 2.0, 3.0, 4.0]::[Double])
  putStrLn $ show x
 
 test_repa.hs:10:13:
Ambiguous type variable `sh' in the constraint:
  `Shape sh' arising from a use of `show' at test_repa.hs:10:13-18
Probable fix: add a type signature that fixes these type variable(s)
 Failed, modules loaded: none.
 
 After much staring at the type signatures I finally figured out that adding a 
 type annotation to x of :
 
  :: Array DIM2 Double
 
 would fix the problem, but I'm not completely clear as to why.

Because the GHC type system doesn't (yet) know that applying shapeOfList to a 
two-element list should yield a DIM2.


 after all fromList is typed:
 
  (Shape sh, Elt a) => sh -> [a] -> Array sh a
 
 Since it knows [a] is [Double] and sh must be - well I'm not really clear on 
 what sh is supposed to be.  therein lies my problem.  Although it does seem 
 that sh can be darn near anything, which is probably why it was ambiguous.

The Shape is the dimensionality of the array. The only valid choices for sh are 
DIM1, DIM2, DIM3, etc., for 1-dimensional, 2-dimensional, etc. arrays.


 At one point I had tried something like (2 :. 2) and got a whole host of 
 errors for that too, except that DIM2 is defined in exactly that way, so it's 
 not at all obvious why that didn't work.

Try (Z :. 2 :. 3). This is basically a list containing the column and row 
lengths. Similar to (3 : 2 : []), except that the list extends to the left 
instead of the right. The Z constructor is equivalent to [].
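The snoc-list idea can be sketched without repa at all. This is a standalone toy whose names merely mirror repa's, so it runs with base alone:

```haskell
{-# LANGUAGE TypeOperators #-}

-- Z is the empty shape (like []), and (:.) adds one dimension on the
-- left-extending end, just as described above.
data Z = Z deriving (Show, Eq)
data tail :. head = tail :. head deriving (Show, Eq)
infixl 3 :.

type DIM1 = Z :. Int
type DIM2 = DIM1 :. Int

-- The shape of a 2x3 array, analogous to the list 3 : 2 : []:
shape :: DIM2
shape = Z :. 2 :. 3

-- The number of elements falls straight out of the shape.
size :: DIM2 -> Int
size (Z :. rows :. cols) = rows * cols

main :: IO ()
main = print (size shape)  -- prints 6
```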


Cheers,
Ben.





Re: [Haskell-cafe] Python is lazier than Haskell

2011-04-29 Thread Ben Lippmeier

On 29/04/2011, at 6:08 PM, Malcolm Wallace wrote:

 On 29 Apr 2011, at 05:38, Ben Lippmeier b...@ouroborus.net wrote:
 
 Laziness at the value level causes space leaks, 
 
 This is well-worn folklore, but a bit misleading.  

:-) Like "permanent markers in the hands of children cause suffering". It's not 
a tautology, but an overgeneralisation that holds more often than not. 


 If anything, I think there is observation bias: lazy programmers have good 
 tools for identifying, finding, and removing leaks.  Others do not.

Sharp tools well honed through years of hunting them down. If only they were 
never there in the first place.


I don't disagree with you. My original comment was more bait than anything else.

Ben.




Re: [Haskell-cafe] Python is lazier than Haskell

2011-04-28 Thread Ben Lippmeier

On 27/04/2011, at 7:30 PM, Henning Thielemann wrote:

  If Haskell is great because of its laziness,
   then Python must be even greater,
   since it is lazy at the type level.

Laziness at the value level causes space leaks, and laziness at the type level 
causes mind leaks. Neither are much fun.

When people start wanting laziness at the kind level we'll have to quarantine 
them before the virus spreads...

Ben.




[Haskell-cafe] CFT -- Haskell Implementors' Workshop 2011

2011-04-19 Thread Ben Lippmeier

Call for Talks
ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2011
Tokyo, Japan, September 23rd, 2011
  The workshop will be held in conjunction with ICFP 2011
 http://www.icfpconference.org/icfp2011/

Important dates

Proposal Deadline:  22nd July  2011
Notification:8th August2011
Workshop:   23rd September 2011

The Haskell Implementors' Workshop is to be held alongside ICFP 2011
this year in Tokyo, Japan. There will be no proceedings; it is an
informal gathering of people involved in the design and development
of Haskell implementations, tools, libraries, and supporting
infrastructure.

This relatively new workshop reflects the growth of the user
community: there is a clear need for a well-supported tool chain for
the development, distribution, deployment, and configuration of
Haskell software.  The aim is for this workshop to give the people
involved with building the infrastructure behind this ecosystem an
opportunity to bat around ideas, share experiences, and ask for
feedback from fellow experts.

We intend the workshop to have an informal and interactive feel, with
a flexible timetable and plenty of room for ad-hoc discussion, demos,
and impromptu short talks.


Scope and target audience
-

It is important to distinguish the Haskell Implementors' Workshop from
the Haskell Symposium which is also co-located with ICFP 2011. The
Haskell Symposium is for the publication of Haskell-related research. 
In contrast, the Haskell Implementors' Workshop will have no
proceedings -- although we will aim to make talk videos, slides and 
presented data available with the consent of the speakers.

In the Haskell Implementors' Workshop we hope to study the underlying
technology. We want to bring together anyone interested in the nitty
gritty details necessary to turn a text file into a deployed product.
Having said that, members of the wider Haskell community are more than
welcome to attend the workshop -- we need your feedback to keep the
Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics
that people feel we've missed, so by all means submit a proposal even
if it doesn't fit exactly into one of these buckets:

  * Compilation techniques
  * Language features and extensions
  * Type system implementation
  * Concurrency and parallelism: language design and implementation
  * Performance, optimisation and benchmarking
  * Virtual machines and run-time systems
  * Libraries and Tools for development or deployment


Talks
-

At this stage we would like to invite proposals from potential speakers
for a relatively short talk. We are aiming for 20 min talks with 10 mins
for questions and changeovers. We want to hear from people writing
compilers, tools, or libraries, people with cool ideas for directions in
which we should take the platform, proposals for new features to be
implemented, and half-baked crazy ideas. Please submit a talk title and
abstract of no more than 200 words to b...@cse.unsw.edu.au

We will also have a lightning talks session which will be organised on
the day. These talks will be 2-10 minutes, depending on available time.
Suggested topics for lightning talks are to present a single idea, a
work-in-progress project, a problem to intrigue and perplex Haskell
implementors, or simply to ask for feedback and collaborators.


Organisers
--

  * Rebekah Leslie   (Portland State University)
  * Ben Lippmeier - co-chair (University of New South Wales)
  * Andres Loeh  (Well-Typed LLP)
  * Oleg Lobachev(University of Marburg)
  * Neil Mitchell - co-chair (Standard Chartered)
  * Dimitrios Vytiniotis (Microsoft Research)



Re: [Haskell-cafe] Using DPH

2011-04-12 Thread Ben Lippmeier

On 12/04/2011, at 7:32 PM, Wilfried Kirschenmann wrote:

 Hi,
 
 In order to do a performance comparison beetween different approaches for our 
 application, I make different implementation of a simple example (computing 
 the norm of a vector expression).
 I rely on Repa to do this. 
 However, when I tried to build the parallel version (-threaded -fvectorise 
 -rtsopts), I got an error specifying that dph-par was not available. Indeed, 
 It wasn't.

Repa and DPH are different projects. The compilation mechanism and approach to 
parallelism are quite different between them. You only need -fvectorise to turn 
on the vectoriser for DPH code. You don't need (or want) -fvectorise for Repa 
programs. DPH is also still at the research prototype stage, and not yet at a 
point where you'd try to use it for anything real.

With your example code, you also need to use R.force at appropriate points, and 
add matches against @(Array _ [Region RangeAll (GenManifest _)]). The reasons 
for both of these are explained in [1]. Hopefully the second will be fixed by a 
subsequent GHC release. You must also add {-# INLINE fun #-} pragmas to 
polymorphic functions or you will pay the price of dictionary passing for the 
type class overloading.


With the attached code:

desire:tmp benl$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.0.3

desire:tmp benl$ ghc-pkg list |grep repa
repa-2.0.0.2
repa-algorithms-2.0.0.2
repa-bytestring-2.0.0.2
repa-io-2.0.0.2

desire:tmp benl$ ghc -rtsopts -threaded -O3 -fllvm -optlo-O3 -fno-liberate-case 
--make haskell.hs -XBangPatterns -fforce-recomp

desire:tmp benl$ /usr/bin/time ./haskell
[3.3645823e12]
72518800
6.62 real 6.39 user 0.22 sys


This runs but doesn't scale with an increasing number of threads. I haven't 
looked at why. If all the work is in R.sum then that might be the problem -- I 
haven't put much time into optimising reductions, just maps and filters.

Cheers,
Ben.

[1] http://www.cse.unsw.edu.au/~benl/papers/stencil/stencil-icfp2011-sub.pdf




haskell.hs
Description: Binary data


Re: [Haskell-cafe] Using DPH

2011-04-12 Thread Ben Lippmeier

On 12/04/2011, at 11:50 PM, Wilfried Kirschenmann wrote:

 surprisingly, when removing the R.force from the code you attached,
 performances are better (speed-up=2). I suppose but I am not sure that
 this allow for loop fusions beetween the R.map ant the R.sum.
 
 I use ghc 7.0.3, Repa 2.0.0.3 and LLVM 2.9.
 
 By the end, the performances with this new version (0.48s) is 15x
 better than my original version (6.9s)
 However, the equivalent sequential C code is still 15x better (0.034s).
 
 This may indeed be explained by the fact that all computations are
 performed inside the R.sum.

Yeah, the Repa fold and sum functions just use the equivalent Data.Vector ones. 
They're not parallelised and I haven't looked at the generated code. I'll add a 
ticket to the trac to fix these, but won't have time to work on it myself in 
the near future.

Ben.




Re: [Haskell-cafe] Type Directed Name Resolution

2010-11-11 Thread Ben Lippmeier

On 12/11/2010, at 2:26 AM, Malcolm Wallace wrote:

 The point is that refusing something you can have now (though
 of course it's an open question whether TDNR is something we can have
 now) out of fear that it'll prevent you getting something better
 later is speculative and often backfires.
 
 I think we are very far from having TDNR now.  It is really quite 
 complicated to interleave name resolution with type checking in any compiler. 
  So far, we have a design, that's all, no implementation.  We also have 
 (several) designs for proper record systems.

Disciple has TDNR, and there is an implementation in DDC. It is a bit 
complicated, mainly because you can't determine the call graph of the program 
before starting inference. In ML style inference you're supposed to 
let-generalise groups of recursive bindings together, but for TDNR you can only 
determine what is recursive once you've resolved the names (which depends on 
the types, which you need to infer).

The algorithm is described starting at page 168 in my thesis here: 
http://www.cse.unsw.edu.au/~benl/papers/thesis/lippmeier-impure-world.pdf

Disciple doesn't have type functions or associated types though. I think it'd 
be nicer for GHC if we could leverage some of the other extensions, as 
suggested in Mark Lentczner's post.

Ben.


Re: [Haskell-cafe] Re: Paralelism and Distribution in Haskell

2010-09-07 Thread Ben Lippmeier

On 07/09/2010, at 6:11 PM, Johannes Waldmann wrote:

 Mathew de Detrich deteego at gmail.com writes:
 
 Haskell is still by far one of the best languages 
 to deal with concurrency/parallelism. 
 
 Sure, I fully agree.
 
 I am using concurrency (with explicit forkIO, communication via Chan) 
 a lot (my Haskell application controls several external constraint solvers).
 
 For parallelism, I'm just missing some benchmark code
 that I can run on my machine (i7 CPU, GTX 295 GPU, ghc-6.12.3)
 more or less out-of-the-box and that will impress my students and myself.
 (That is, get a speed-up of 8, or 480, without the program
 looking 8 times (or 480 times) more ugly...)
 

The matrix-matrix multiplication benchmark from the Repa library does this.

Check out 
  http://www.cse.unsw.edu.au/~benl/papers/repa/repa-icfp2010.pdf
  http://hackage.haskell.org/package/repa
  http://hackage.haskell.org/package/repa-examples

Though be warned you must use a recent GHC head build to get good performance. 
After GHC 7.0 is out (in a few weeks) we'll be able to release a properly 
stable version.

Note that speedup is an important consideration, but not the end of the 
story. It's harder to find a benchmark that displays all of nice code + speedup 
+ good absolute performance. The first and last of these tend not to be friends.

Ben.



Re: [Haskell-cafe] help me evangelize haskell.

2010-09-05 Thread Ben Lippmeier

On 05/09/2010, at 2:38 AM, Michael Litchard wrote:

 I'll be starting a new job soon as systems tool guy. The shop is a
 perl shop as far as internal automation tasks go. But I am fortunate
 to not be working with bigots. If they see a better way, they'll take
 to it. So please give me your best arguments in favor of using haskell
 for task automation instead of perl, or awk or any of those scripting
 languages.

Try to avoid religious arguments like by using Perl you're living in a state 
of sin, and focus on look how much easier it is to do X in Haskell. 

Grandiose, hand-wavy assertions like strong typing leads to shorter 
development times and more reliable software don't work on people that haven't 
already been there and done that. When you try to ram something down someone's 
throat they tend to resist. However, if you can provide something tasty and 
appealing they'll eat it themselves. Write a nice program, show it to your Perl 
programmer, and if they also think it's nice -- then you've already won.

Ben.



Re: [Haskell-cafe] DpH/repa cache locality

2010-07-18 Thread Ben Lippmeier

The difficulty with optimising for cache effects is that you're effectively 
introducing sequential dependencies between elements of the array you're 
processing. 

To say this another way: If you can evaluate the array elements in any order 
then you can evaluate them in parallel. Adding restrictions like "element i 
must be processed before element i+1" can improve cache usage, but it also 
restricts the evaluation order and makes it less obvious how to parallelise 
the code.

For Repa, I think we'll end up providing primitive operators for doing things 
like 2D image convolutions. Moving from a linear image convolution, where all 
the pixels in one row are processed before moving onto the next, to a 
block-based convolution is really a change in algorithm -- and not something 
I'd expect a general-purpose compiler optimisation to do.

Ben.


On 13/07/2010, at 9:49 , Gregory Crosswhite wrote:

 Hey everyone,
 
 Just out of curiosity, what work is being done in the data parallel
 haskell / repa projects regarding cache locality?  The reason I am
 asking is because, as I understand it, the biggest bottleneck on today's
 processors are cache misses, and the reason why optimized
 platform-specific linear algebra libraries perform well is because they
 divide the data into chunks that are optimally sized for the cache in
 order to maximize the number of operations performed per memory access. 
 I wouldn't expect data parallel haskell/repa to automatically know what
 the perfect chunking strategy should be on each platform, but are there
 any plans being made at all to do something like this?
 
 (To be explicit, this isn't meant as a criticism;  I'm just curious and
 am interested in seeing discussion on this topic by those more
 knowledgeable than I.  :-) )
 
 Thanks!
 Greg
 
 



Re: [Haskell-cafe] lambda calculus and equational logic

2010-07-14 Thread Ben Lippmeier

On 14/07/2010, at 6:26 , Patrick Browne wrote:

 Thanks for your clear and helpful responses.
 I am aware that this question can lead to into very deep water.
 I am comparing Haskell with languages based on equational logic (EL)
 (e.g. Maude/CafeOBJ, lets call them ELLs).  I need to identify the
 fundamental distinction between the semantics of ELLs and Haskell. The
 focus of my original question was just the purely functional, side
 effect free, part of Haskell.

If you ignore anything to do with the IO monad (or ST), then all of Haskell can 
be desugared to (untyped) call-by-name/need lambda calculus. If you stick with 
Haskell98 then you can desugar it to the rank-2 fragment of System-F + 
algebraic data types + case expressions + appropriate constants and primops. 
This is generally regarded as the Haskell Kernel Language, which is mentioned 
but explicitly not defined in the language standard.
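One small, runnable taste of what "desugaring to a kernel" means (an illustration, not the actual GHC pipeline): surface-level recursion can be expressed in kernel terms as a fixpoint applied to a lambda. 'fix' is defined locally here; Data.Function exports the same combinator.

```haskell
-- Surface Haskell: ordinary recursive definition.
sumTo :: Int -> Int
sumTo n = if n <= 0 then 0 else n + sumTo (n - 1)

-- Kernel-style: the same function as a fixpoint of a non-recursive lambda.
sumTo' :: Int -> Int
sumTo' = fix (\rec n -> if n <= 0 then 0 else n + rec (n - 1))
  where
    fix f = f (fix f)

main :: IO ()
main = print (sumTo 10 == sumTo' 10)  -- prints True
```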


 The relationship between the denotational and the proof theoretic
 semantic is important for soundness and completeness. Which was sort of
 behind my original question.
 
 Would it be fair to say
 1) Lambda calculus provides the operational semantics for Haskell

Notionally yes, but practically no. AFAIC there isn't a formal semantics for 
Haskell, but there is for fragments of it, and for intermediate representations 
like System-Fc (on which GHC is based). There are also multiple lambda calculi, 
depending on which evaluation mechanism you use.

The point I was trying to make in the previous message is what while Haskell 
includes the IO monad, people insist on calling the whole language purely 
functional and side effect free, which is murky territory. Sabry's "What is 
a Purely Functional Language?" shows that unrestricted beta-reduction is not 
sound in a simple monadic language using a pass-the-world implementation -- 
though Wouter's paper gives a cleaner one. Another paper that might help is 
Sondergaard and Sestoft's highly readable "Referential Transparency, 
Definiteness and Unfoldability".


 2) Maybe equational logic provides the denotational semantics.
 3)I am not sure of proof theoretic semantic for Haskell.
  The Curry-Howard correspondence is a proof theoretic view but only at
  type level.
 
 Obviously, the last three points are represent my efforts to address
 this question. Hopefully the café can comment on the accuracy of these
 points.

My (limited) understanding of Maude is that rewrites can happen at any point in 
the term being reduced. This is different from, say, the semantics of 
call-by-name lambda calculus which has a specific evaluation order. In Haskell 
it's no problem to pass a diverging expression to some function, which might 
store it in a data structure, but then discard it later. However, when rewrites 
can happen at any point in the term being reduced, if any part of it diverges 
then the whole thing does. This is just from skimming slides for Maude talks 
though...
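The contrast with reduce-anywhere rewriting can be seen in a few lines of base-only Haskell (an illustrative sketch, not from the original thread): under call-by-need a divergent argument is harmless as long as it is never demanded, whereas a system free to rewrite inside any subterm could choose to reduce it first and diverge.

```haskell
-- Discards its second argument, so that argument is never evaluated.
constFirst :: a -> b -> a
constFirst x _ = x

-- A divergent term.
loop :: Int
loop = loop

main :: IO ()
main = print (constFirst (42 :: Int) loop)  -- prints 42; 'loop' stays a thunk
```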

Ben.



Re: [Haskell-cafe] lambda calculus and equational logic

2010-07-13 Thread Ben Lippmeier

On 13/07/2010, at 20:47 , Brent Yorgey wrote:

 On Wed, Jul 07, 2010 at 09:56:08AM +0100, Patrick Browne wrote:
 Hi,
 In Haskell what roles are played by 1) lambda calculus and 2) equational
 logic? Are these roles related?
 
 Hopefully this question can be answered at a level suitable for this forum.
 
 Since no one else has responded I'll take a quick stab at answering,
 and others can fill in more information as appropriate, or ask further
 questions.
 
  2) Haskell is able to embrace equational logic because of its
 insistence on purity: in a Haskell program (leaving aside for the
 moment things like seq and unsafePerformIO) you can always
 replace equals by equals (where equality is the normal
 beta-equality for System F omega, plus definitional equality
 introduced by Haskell declarations) without changing the
 semantics of your program. So the story of an equational logic
 for System F omega and the story of evaluating Haskell programs
 are to a large extent the same story.

Replacing equals by equals usually doesn't change anything. 

What kind of equality do you use for getChar :: IO Char?


  Coming up with equational
 logics corresponding to most imperative languages (or even a
 non-pure functional language like OCaml) is massively complicated
 by the need to keep track of an implicit state of the world due
 to the presence of side effects.

By "massively complicated" you mean harder than the simplest case. Haskell's 
do-notation makes the state of the world implicit, and performing the 
desugaring makes it explicit again -- but then that state isn't really the 
state of the world...

Sorry for my heckling. You gave a fine answer, to the standard question. 
However, I propose mandating that all such questions asked on the 
haskell-beginners list are answered with "Haskell's purity solves everything" 
-- but on haskell-cafe they should get "Haskell's purity solves everything, 
but read Sabry's paper on 'What is a Purely Functional Language?', because 
it's really more subtle than that."
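
As an aside, the uncontroversial pure half of that standard answer can be sketched like this (illustrative names only, not from the thread):

```haskell
-- Sketch of "replacing equals by equals" for pure code: because
-- 'double' has no effects, naming its result and repeating the call
-- are interchangeable everywhere in the program.
double :: Int -> Int
double x = x + x

a :: Int
a = double 3 + double 3

b :: Int
b = let y = double 3 in y + y

main :: IO ()
main = print (a == b)  -- True: the two forms are observably equal
```

The subtlety Ben is pointing at is that no such substitution story is available for values like getChar :: IO Char.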

Cheers,
Ben.




Re: [Haskell-cafe] Multidimensional Matrices in Haskell

2010-07-11 Thread Ben Lippmeier

I've found using Data.Vector works fine for this, just write an indexing 
function to handle the multiple dimensions.
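
A minimal sketch of that approach, with illustrative names (Matrix, index2 and mkMatrix are mine, not from the vector library):

```haskell
import qualified Data.Vector as V

-- Sketch: a 2-D matrix stored as a flat Vector in row-major order.
data Matrix a = Matrix
  { width  :: Int
  , height :: Int
  , cells  :: V.Vector a }

-- The indexing function that handles the multiple dimensions.
index2 :: Matrix a -> Int -> Int -> a
index2 m x y = cells m V.! (y * width m + x)

-- Build a matrix from a function of (x, y) coordinates.
mkMatrix :: Int -> Int -> (Int -> Int -> a) -> Matrix a
mkMatrix w h f =
  Matrix w h (V.generate (w * h) (\i -> f (i `mod` w) (i `div` w)))

main :: IO ()
main = print (index2 (mkMatrix 3 2 (\x y -> x + 10 * y)) 2 1)  -- 12
```

The same scheme extends to more dimensions by adding further coordinates to the flattening arithmetic.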

The gloss-examples package has a nice graphical demo of Conway's game of life 
that uses Vector. Gloss is specifically designed for beginners, so no monads 
required.

The code for the demo is at:
  http://code.haskell.org/gloss/gloss-stable/examples/Conway/

and the gloss homepage is at:
  http://trac.haskell.org/gloss/

Ben.


On 08/07/2010, at 12:08 AM, Mihai Maruseac wrote:

 Hi,
 
 A friend of mine wanted to do some Cellular Automata experiments in
 Haskell and was asking me what packages/libraries are there for
 multidimensional matrices. I'm interested in both immutable and
 mutable ones but I don't want them to be trapped inside a monad of any
 kind.
 
 Any hints?
 
 -- 
 MM


Re: [Haskell-cafe] Relating generated asm to Core

2010-06-23 Thread Ben Lippmeier

Hi Rami, 
You'll want to first look at the Cmm (C minus minus) code, which is the 
imperative intermediate language that GHC uses before conversion to assembly.

Do something like ghc -c Whatever.hs -ddump-cmm. The names of the blocks of 
cmm code should match the ones in core. If not, then you might want to also 
look at the output of -ddump-stg.

Keep in mind that the assembly output by GHC will not look like that output by, 
say, GCC, because of the lazy evaluation method. You'll want to try to ignore 
the vast swathes of thunking code and just focus on the inner loops of your 
particular algorithm.

Ben.



On 23/06/2010, at 1:35 PM, Rami Mukhtar wrote:

 Hi,
  
 Can anyone tell me a way to identify the generated assembly (as found in the 
 intermediate files produced by GHC) corresponding to a particular fragment of 
 Core code.  
  
 Thanks,
 
 Rami
 
 The information in this e-mail may be confidential and subject to legal 
 professional privilege and/or copyright. National ICT Australia Limited 
 accepts no liability for any damage caused by this email or its attachments.


Re: [Haskell-cafe] Stone age programming for space age hardware?

2010-06-07 Thread Ben Lippmeier

On 07/06/2010, at 3:05 AM, Michael Schuerig wrote:

 I have a hunch that the real restrictions of this kind of software are 
 not concerned with fixed memory, iterations, whatever, but rather with 
 guaranteed bounds. If that is indeed the case, how feasible would it be 
 to prove relevant properties for systems programmed in Haskell?

For full Haskell that includes laziness and general recursion: not very. 
Proving properties about the values returned by functions is one thing, but 
giving good guaranteed upper bounds to the time and space used by an arbitrary 
program can be very difficult.

See for example:

Jörgen Gustavsson and David Sands, Possibilities and limitations of 
call-by-need space improvement, ICFP 2001: Proc. of the International 
Conference on Functional Programming, ACM, 2001, pp. 265–276.

Adam Bakewell and Colin Runciman, A model for comparing the space usage of lazy 
evaluators, PPDP 2000: Proc. of the International Conference on Principles and 
Practice of Declarative Programming, ACM, 2000, pp. 151–162.

Hans-Wolfgang Loidl. Granularity in Large-Scale Parallel Functional 
Programming. PhD Thesis. Department of Computing Science, University of 
Glasgow, March 1998.


I expect future solutions for this domain will look more like the Hume (family 
of) languages [1]. They give several language levels, and can give stronger 
bounds for programs using less language features.

[1] http://www-fp.cs.st-andrews.ac.uk/hume/index.shtml




Re: [Haskell] Re: [Haskell-cafe] Work on Video Games in Haskell

2010-05-26 Thread Ben Lippmeier

On 27/05/2010, at 9:01 AM, Edward Kmett wrote:
 While we can all acknowledge the technical impossibility of identifying the 
 original source language of a piece of code...


Uh,

desire:tmp benl$ cat Hello.hs
main = putStr "Hello"

desire:tmp benl$ ghc --make Hello.hs

desire:tmp benl$ strings Hello | head
Hello
base:GHC.Arr.STArray
base:GHC.Arr.STArray
base:GHC.Classes.D:Eq
base:GHC.Classes.D:Eq
failed to read siginfo_t
 failed: 
Warning: 
select
buildFdSets: file descriptor out of range

...






Re: [Haskell] Re: [Haskell-cafe] Work on Video Games in Haskell

2010-05-26 Thread Ben Lippmeier

Objects in the heap also have a very regular structure. They all have code 
pointers as their first word, which point to info tables that also have a 
regular structure [1]. GHC produced code is probably one of the easiest to 
identify out of all compiled languages...

http://hackage.haskell.org/trac/ghc/wiki/Commentary/Rts/Storage/HeapObjects

Ben.


On 27/05/2010, at 1:15 PM, Daniel Peebles wrote:

 Next up, binary obfuscation! Apple already uses these extensively in their 
 Fairplay code. Surely it isn't against the rules (yet?) to apply them to your 
 program before submitting it to the store? :P
 
 On Wed, May 26, 2010 at 11:01 PM, Ben Lippmeier b...@ouroborus.net wrote:
 
 On 27/05/2010, at 9:01 AM, Edward Kmett wrote:
  While we can all acknowledge the technical impossibility of identifying the 
  original source language of a piece of code...
 
 
 Uh,
 
 desire:tmp benl$ cat Hello.hs
 main = putStr "Hello"
 
 desire:tmp benl$ ghc --make Hello.hs
 
 desire:tmp benl$ strings Hello | head
 Hello
 base:GHC.Arr.STArray
 base:GHC.Arr.STArray
 base:GHC.Classes.D:Eq
 base:GHC.Classes.D:Eq
 failed to read siginfo_t
  failed:
 Warning:
 select
 buildFdSets: file descriptor out of range
 
 ...
 
 
 
 


Re: [Haskell-cafe] Proposal: Australian Hackathon

2010-03-16 Thread Ben Lippmeier

On 16/03/2010, at 10:45 PM, Alex Mason wrote:

 I'd suggest focusing on core Haskell infrastructure, like compilers and 
 tools, rather than individual libraries -- though it all depends on who 
 wants to come along.
 
 Basically, we're just aiming to get a bunch of like minded people together, 
 who want to hack on projects with some other people, possibly with the 
 authors of the projects (for example, I might want help to work on the 
 Accelerate library that Manuel, Gabriele and Sean have been working on, and 
 being able to talk to them directly to find out how the code is all laid out 
 and organised would be much much easier than trying to do the same thing over 
 IRC for example.)

I meant that with these systems there's more of a chance that people have past 
experience with them, so you can hit the ground running, but it's only a 
suggestion.


 You'll also want to consider how a proposed OzHaskell might align and/or 
 combine with other events such as SAPLING[1] and fp-syd[2]. There is also 
 the ICFP programming contest in a few months that many people will be 
 interested in...
 
 The more people we can get in touch with, the better, we'd like to hear from 
 all these groups, if for no better reason than to get the word out that such 
 a thing might be happening... maybe, and to help gauge interest. The more 
 people that know, the more pressure we can bring upon ourselves to get 
 something organised.
 
 I was planning on forwarding this onto the FP-Syd list, but maybe I could ask 
 you to do that Ben? These mailing list things are before my time, and I 
 wouldn't have a clue what to do -_-

You seem to have worked out haskell-cafe, so it can't be that hard!

Ben.




Re: [Haskell-cafe] Proposal: Australian Hackathon

2010-03-15 Thread Ben Lippmeier

On 16/03/2010, at 4:28 PM, Ivan Miljenovic wrote:

 * A plotting library using Ben's newly released Gloss library (for
 people who can't or won't install Gtk2Hs to get Chart working; Alex
 Mason is interested in this)
 * Various graph-related project (graphviz, generic graph class, etc.;
 this assumes someone else apart from me cares about this stuff)
 * Hubris if Mark Wotton comes along
 * LLVM if David Terei comes

I'd suggest focusing on core Haskell infrastructure, like compilers and tools, 
rather than individual libraries -- though it all depends on who wants to come 
along.


 So, at least as an initial listing, we'd need to have a listing of:
 1) Who's interested
 2) What dates are good
 3) What projects people want to work on
 4) Where we can host this

You'll also want to consider how a proposed OzHaskell might align and/or 
combine with other events such as SAPLING[1] and fp-syd[2]. There is also the 
ICFP programming contest in a few months that many people will be interested 
in...

Hosting is not a problem. If people want to come to Sydney then I'm sure we can 
organise a room at UNSW. 

Ben.


[1] http://plrg.ics.mq.edu.au/projects/show/sapling
[2] http://groups.google.com/group/fp-syd
[3] http://www.icfpconference.org/contest.html



[Haskell-cafe] Re: Fast Haskell Parser

2010-03-10 Thread Ben Lippmeier

Hi John, 
Doing a Google search for "haskell parser" returns the following link as its 
first result. That's the parser that GHC uses.

  http://www.haskell.org/happy/

You could also check out the following:

  http://www.haskell.org/haskellwiki/Parsec
  http://hackage.haskell.org/package/attoparsec

This would also be a perfect question to ask on the haskell-cafe mailing list...

Cheers,
Ben.


On 11/03/2010, at 10:39 AM, John D. Earle wrote:

 I was thinking of ways to create an efficient Haskell parser. My initial 
 thinking was to write a top down parser in Haskell, but if you want speed a 
 table driven approach may make greater sense.
 
 Due to the existence of build bots there is a certain degree of complacency 
 concerning build times. I feel that convenience is an important factor. It 
 should be convenient to build the source. Build bots make an assumption, 
 namely the existence of a formal infrastructure. I believe that it should be 
 possible to build something from source casually.
 
 This is a less demanding goal than high performance incremental builds. It 
 would be nice to out perform make files because if you fail to do this, can 
 it really be said that you are making progress? Correctness appears to be a 
 low priority among computer programmers. That said, it may be worth investing 
 some time in advance to figuring out how to best achieve both objectives, 
 namely correctness and performance. Who knows skills acquired in one project 
 may be useful in another and performance is usually welcome.
 
 So my question is, What sort of tools and methodologies exist in Haskell to 
 create high performance parsers? My impression is the speed at which the 
 parser performs its task is not the bottle-neck, but the parser might as well 
 be designed to be efficient so as not to be intellectually lazy. It may even 
 turn out that the parser may need to be efficient merely to compensate for 
 the spawn of correctness, namely slow builds. 
 ___
 Cvs-ghc mailing list
 cvs-...@haskell.org
 http://www.haskell.org/mailman/listinfo/cvs-ghc



Re: [Haskell-cafe] What's the deal with Clean?

2009-11-03 Thread Ben Lippmeier

David Leimbach wrote:
Disciplined Disciple might be interesting to look at here too, but i'm 
not sure I'd deploy anything with DDC just yet :-)
:) Nor would I (and I wrote most of it). I think the approach is right, 
but the compiler itself is still in the research prototype stage.


Ben.



Re: [Haskell-cafe] What's the deal with Clean?

2009-11-03 Thread Ben Lippmeier

David Leimbach wrote:
I have to admit, the first time I hit the wiki page for DDC I said to 
myself "Self, this sounds crazy complicated."  Then I read part of the 
PDF (your thesis I believe) about Region Types on the bus ride to work 
and thought.  Gee I think I scared myself off too quickly.


Uniqueness typing is quite interesting in Clean, but to control 
aliasing, like really *control* aliasing, that's just far out man.


So I still have to wrap my head around why this isn't going to get 
completely out of control and see why it's all safer than just 
writing C code but I must say the attention I will be paying to DDC 
has just gone quite a bit up.


:) A correct C program is just as safe as a correct Haskell/Disciple 
program.


If you're using destructive update then aliasing, side effects and 
mutability all start to matter. It might look complicated when you 
reflect all these things in the type system, but you're really just 
getting a handle on the inherent complications of the underlying program.


I suppose the trick is to be able to ignore said complications when you 
just don't care, or they're not relevant for your particular problem...


Ben.









Re: [Haskell-cafe] Re: DDC compiler and effects; better than Haskell?

2009-08-13 Thread Ben Lippmeier

Heinrich Apfelmus wrote:

Actually you need five versions: The pure version, the pre-order
traversal, the post-order traversal, the in-order traversal, and the
reverse in-order traversal.  And that is just looking at syntax.  If you
care about your semantics you could potentially have more (or less).



Exactly! There is no unique choice for the order of effects when lifting
a pure function to an effectful one.

For instance, here are two different versions of an effectful map:

    mapM f [] = return []
    mapM f (x:xs) = do
        y  <- f x
        ys <- mapM f xs
        return (y:ys)

    mapM2 f [] = return []
    mapM2 f (x:xs) = do
        ys <- mapM2 f xs
        y  <- f x
        return (y:ys)

Which one will the DDC compiler choose, given

   map f [] = []
   map f (x:xs) = f x : map f xs
  

Disciple uses default strict, left to right evaluation order. For
the above map function, if f has any effects they will be executed
in the same order as the list elements.


? Whenever I write a pure higher order function, I'd also have to
document the order of effects.
  


If you write a straight-up higher-order function like map above,
then it's neither pure nor impure. Rather, it's polymorphic in the
effect of its argument function. When effect information is
added to the type of map it becomes:


map :: forall a b %r1 %r2 !e1 !e2
   .  (a -(!e1)> b) -> List %r1 a -(!e2)> List %r2 b
   :- !e2 = !{ !Read %r1; !e1 }


Which says the effect of evaluating map is to read the list and
do whatever the argument function does. If the argument function
is pure, and the input list is constant, then the application
of map is pure, otherwise not.

If you want to define an always-pure version of map, which
only accepts pure argument functions then you can give it the
signature:

pureMap :: (a -(!e1)> b) -> List %r1 a -> List %r2 b
   :- Pure !e1, Const %r1

.. and use the same definition as before.

Note that you don't have to specify the complete type in the
source language, only the bits you care about - the rest is
inferred.

Now if you try to pass pureMap an impure function, you get
an effect typing error.

Adding purity constraints allows you to write H.O functions
without committing to an evaluation order, so you can change
it later if desired.


Ben.




Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-12 Thread Ben Lippmeier

Derek Elkins wrote:

The compiler is supposed to be able to reorder non-strict
evaluation to do optimisations, but that can't be done if effects
could happen.



There's nothing special about non-strict evaluation that makes the
antecedent true.  Replacing non-strict with strict gives just as
much of a valid statement.  It is purity that allows (some) reordering
of evaluation.
  

Here are two effectful statements that can safely be reordered.

 print foo
 x := 5


here are two more

 y := 2
 z := 3

(provided y and z don't alias)


Purity allows some reordering of evaluation, so does knowing that
two effectful computations won't interfere.
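
The same point in runnable Haskell, using IORefs to stand in for the assignable variables (an illustration of the idea, not Disciple code):

```haskell
import Data.IORef

-- Sketch: two writes to distinct (non-aliased) references can be
-- performed in either order with the same observable result, even
-- though both are effectful.
main :: IO ()
main = do
  y <- newIORef (0 :: Int)
  z <- newIORef (0 :: Int)
  writeIORef y 2
  writeIORef z 3          -- swapping these two lines changes nothing
  vy <- readIORef y
  vz <- readIORef z
  print (vy, vz)
```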


Ben.





Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-12 Thread Ben Lippmeier

Dan Doel wrote:
For instance: what effects does disciple support? Mutation and IO? 

You can create your own top-level effects which interfere
will all others, for example:

effect !Network;
effect !File;

readFile :: String -(!e)> String
:- !e = !File

Now any function that calls readFile will also have a !File effect.

What if I 
want non-determinism, or continuations, etc.? How do I as a user add those 
effects to the effect system, and specify how they should interact with the 
other effects? As far as I know, there aren't yet any provisions for this, so 
presumably you'll end up with effect system for effects supported by the 
compiler, and monads for effects you're writing yourself.
  

Yep.

In Disciple, a computation has an effect if its evaluation cannot
safely be reordered with others having the same effect. That is,
computations have effects if they might interfere with others.

One of the goals of the work has been to perform compiler
optimisations without having to use IO-monad style state threading.
IO is very coarse grained, and using the IO monad for everything
tends to introduce more data-dependencies than strictly needed, which
limits what optimisations you can do.

Non-determinism and continuations are tricker things than the simple
notion of effects-as-interference, which I haven't got a good
solution for.

Ben.




Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-12 Thread Ben Lippmeier

Dan Doel wrote:
Off hand, I'd say I don't write foo and fooM versions of functions much in 
actual programs, either. Such duplication goes into libraries...

It would be ok if the duplication /was/ actually in the libraries,
but often it's not.

Note the lack of Data.Map.mapM and Data.Map.foldM. Want to apply a monadic
computation to all the elements of a Data.Map? Convert it to a list and 
back..


Ben.



Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-12 Thread Ben Lippmeier

Dan Doel wrote:

On Wednesday 12 August 2009 11:46:29 pm Ben Lippmeier wrote:
  

Dan Doel wrote:


Off hand, I'd say I don't write foo and fooM versions of functions much
in actual programs, either. Such duplication goes into libraries...
  

It would be ok if the duplication /was/ actually in the libraries,
but often it's not.

Note the lack of Data.Map.mapM and Data.Map.foldM. Want to apply a monadic
computation to all the elements of a Data.Map? Convert it to a list and
back..



Or use Data.Traversable.mapM and Data.Foldable.foldM.

  

Ah thanks, I didn't notice the Traversable instance. There are
other higher-order functions in Data.Map that don't seem to have
monadic counterparts though, like insertWith, unionsWith, updateAt ...
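
For completeness, a minimal sketch of the Traversable route Dan points out, with an illustrative element action:

```haskell
import qualified Data.Map as M
import qualified Data.Traversable as T

-- Sketch: Data.Traversable.mapM runs a monadic action over every
-- element of a Data.Map directly, with no toList/fromList round trip.
main :: IO ()
main = do
  let m = M.fromList [(1, "a"), (2, "b")]
  m' <- T.mapM (\v -> return (v ++ "!")) m
  print (M.toList m')  -- [(1,"a!"),(2,"b!")]
```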

Ben.





Re: [Haskell-cafe] Proposal: TypeDirectedNameResolution

2009-07-29 Thread Ben Lippmeier


On 28/07/2009, at 6:41 AM, John Dorsey wrote:

I'm assuming that name resolution is currently independent of type
inference, and will happen before type inference.  With the proposal this is
no longer true, and in general some partial type inference will have to
happen before conflicting unqualified names are resolved.

My worry is that the proposal will require a compliant compiler to
interweave name resolution and type inference iteratively.

To my untrained eye it looks complicated and invasive, even without the
mutually recursive case.  Can anyone shed light on whether this would be a
problem for, say, GHC?



My experimental compiler DDC [1] implements TDNR almost exactly as  
given on the Haskell' wiki.


Yes, you have to interweave name resolution with type inference,  
because there is no way to compute the binding dependency graph/call  
graph before type inference proper. This is discussed in section 3.5  
of my thesis [2] (which is currently under examination). For DDC I  
used a constraint based inference algorithm to compute the binding  
dependency graph on the fly, but I don't know how easy it would be  
to retrofit this method into GHC.


Cheers,
Ben.


[1] http://www.haskell.org/haskellwiki/DDC
[2] 
http://cs.anu.edu.au/people/Ben.Lippmeier/project/thesis/thesis-lippmeier-sub.pdf






Re: [Haskell-cafe] ThreadScope: Request for features for the performance tuning of parallel and concurrent Haskell programs

2009-03-11 Thread Ben Lippmeier


Hi Satnam,

On 12/03/2009, at 12:24 AM, Satnam Singh wrote:
Before making the release I thought it would be a good idea to ask  
people what other features they would find useful for performance  
tuning. So if you have any suggestions please do let us know!




Is it available in a branch somewhere to try out?

Ben.




Re: [Haskell-cafe] I want to write a compiler

2009-03-08 Thread Ben Lippmeier


On 08/03/2009, at 12:45 PM, Austin Seipp wrote:


For garbage collection, please see.

Accurate Garbage Collection in an Uncooperative Environment -
http://citeseer.ist.psu.edu/538613.html

This strategy is currently used in Mercury as well as Ben L.'s DDC
language; on that note, I think if you spent some time looking through
the runtime/generated code of DDC, you can see exactly what the paper
is talking about, because it's actually a very simple strategy for
holding onto GC roots:

http://code.haskell.org/ddc/ddc-head/runtime/


That paper explains the basic idea, but neither DDC nor Mercury quite  
follows it (I asked Zoltan). The system in the paper keeps the GC roots  
in structs on the C stack, and chains the structs together as a linked  
list. The problem is that if you take a pointer to data on the C stack  
then GCC freaks out and disables a host of optimisations. I imagine  
it's worried about pointers going bad after the stack frame is popped  
and the space for the struct gets lost.


DDC keeps a shadow stack of GC roots in malloced memory. It's only a  
small difference, but lets the C compiler produce better code.


Ben.





Re: [Haskell-cafe] Performance question

2009-02-26 Thread Ben Lippmeier


Yep, this program will crawl.

You can get reasonable numeric performance out of GHC, but you need to  
work at it. There is some advice in the GHC manual at 
http://www.haskell.org/ghc/docs/latest/html/users_guide/faster.html.


The first thing I would do is replace your
isInCircle :: (Floating a, Ord a) => (a, a) -> Bool
with
isInCircle :: (Double, Double) -> Bool
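
Fleshing that advice out, a sketch of a monomorphic test plus a strictly accumulated counting loop (illustrative code in the spirit of the advice, not Lenny's actual program):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Sketch: a monomorphic circle test avoids dictionary passing, and a
-- strict accumulator avoids building a chain of thunks in the loop.
isInCircle :: (Double, Double) -> Bool
isInCircle (x, y) = x*x + y*y <= 1.0

countHits :: [(Double, Double)] -> Int
countHits = go 0
  where
    go !acc []     = acc
    go !acc (p:ps) = go (if isInCircle p then acc + 1 else acc) ps

main :: IO ()
main = print (countHits [(0, 0), (2, 2), (0.5, 0.5)])  -- 2
```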

Ben.



On 26/02/2009, at 8:53 PM, hask...@kudling.de wrote:


Hi,

i have compared a C++ implementation with a Haskell implementation  
of the Monte Carlo Pi approximation:


http://lennart.kudling.de/haskellPi/

The Haskell version is 100 times slower and i wonder whether i do  
something obvious wrong.


Profiling says that the majority of the time is spend in main. But  
i have no idea where.


Can someone give me a hint?

Thanks,
Lenny






Re: [Haskell-cafe] Performance question

2009-02-26 Thread Ben Lippmeier


On 26/02/2009, at 9:27 PM, hask...@kudling.de wrote:


Currently i can only imagine to define a data type in order to use  
unboxed Ints instead of the accumulator tuple.


That would probably help a lot. It would also help to use two separate  
Double# parameters instead of the tuple.


The thing is that i don't see in the profile output yet what to  
improve.
There are some allocations going on in main, but i don't know what  
causes it.



 The first thing I would do is replace your
 isInCircle :: (Floating a, Ord a) => (a, a) -> Bool
 with
 isInCircle :: (Double, Double) -> Bool


Can you point me to why that matters?


At the machine level, GHC treats the (Floating a, Ord a) as an extra  
argument to the function. This argument holds function pointers that  
tell it how to perform multiplication and <= for the unknown type 'a'.  
If you use Double instead of 'a', then it's more likely to use the  
actual machine op.


Ben.




Re: Re[2]: [Haskell-cafe] Re[4]: [Haskell] Google Summer of Code

2009-02-11 Thread Ben Lippmeier


A: X has some problems with runtime performance.
B: My work solves all your problems. There is no problem.

"Beware of the Turing tar-pit in which everything is possible but  
nothing of interest is easy" - Alan Perlis.


can /= can be bothered.

:)

Ben.


On 12/02/2009, at 5:26 PM, Daniel Peebles wrote:


These seem to be good starting points:

http://donsbot.wordpress.com/2008/05/06/write-haskell-as-fast-as-c-exploiting-strictness-laziness-and-recursion/
http://donsbot.wordpress.com/2008/06/04/haskell-as-fast-as-c-working-at-a-high-altitude-for-low-level-performance/
http://haskell.org/haskellwiki/Wc


On Wed, Feb 11, 2009 at 8:15 PM, Bulat Ziganshin
bulat.zigans...@gmail.com wrote:

Hello Don,

Thursday, February 12, 2009, 3:45:36 AM, you wrote:

You should do your own benchmarking!


well, when you say that ghc can generate code that is fast as gcc, i
expect that you can supply some arguments. is the your only argument
that ghc was improved in last years? :)





Re: [Haskell-cafe] Animated line art

2008-12-05 Thread Ben Lippmeier


On 06/12/2008, at 6:34 AM, Andrew Coppin wrote:


Ben Lippmeier wrote:

The ANUPlot graphics library I wrote does exactly this.
The darcs repo is at http://code.haskell.org/ANUPlot/ANUPlot-HEAD/
It comes with lots of examples that do the sort of things you  
describe.


Does it handle drawing lines and circles (with antialiasing)? Can I  
save the output as PNG?


Lines and circles yes, antialiasing no. It uses OpenGL for rendering,  
so maybe there's a flag to turn it on. PNG isn't usually required for  
animations. When I need to make an image I just do a screenshot.


Ben.



Re: [Haskell-cafe] Animated line art

2008-12-04 Thread Ben Lippmeier




On 05/12/2008, at 10:46 AM, Tim Docker wrote:
Someone else already mentioned FRAN and it's ilk. But perhaps you  
don't

need something that fancy. If you implement your drawing logic as a
function from time to the appropriate render actions, ie

| import qualified Graphics.Rendering.Cairo as C
|
| type Animation = Time -> C.Render ()

then you just need to call this function multiple times to generate
sucessive frames.


The ANUPlot graphics library I wrote does exactly this.
The darcs repo is at http://code.haskell.org/ANUPlot/ANUPlot-HEAD/
It comes with lots of examples that do the sort of things you describe.
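
The time-indexed idea Tim describes can be sketched independently of any particular graphics library (Time and the String "frame" here are stand-ins, not ANUPlot or Cairo types):

```haskell
-- Sketch: an animation is a function from time to a frame; successive
-- frames come from sampling that function at a fixed rate.
type Time = Double

animation :: Time -> String
animation t = "circle at x=" ++ show (cos t)

-- Sample the animation at fps frames per second for the given duration.
frames :: Double -> Double -> [String]
frames fps duration =
  [ animation (fromIntegral i / fps)
  | i <- [0 .. floor (duration * fps) - 1 :: Int] ]

main :: IO ()
main = print (length (frames 25 1))  -- 25
```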

Ben.



Re: [Haskell-cafe] ANNOUNCE: Sun Microsystems and Haskell.org joint project on OpenSPARC

2008-07-25 Thread Ben Lippmeier


http://valgrind.org/info/tools.html

On 26/07/2008, at 11:02 AM, Mitar wrote:


Hi!


If we spend so long blocked on memory reads that we're only utilising
50% of a core's time then there's lots of room for improvements if we
can fill in that wasted time by running another thread.


How can you see how much does your program wait because of L2 misses?
I have been playing lately with dual Quad-Core Intel Xeon Mac Pros
with 12 MB L2 cache per CPU and 1.6 GHz bus speed and it would be
interesting to check this things there.


Mitar




Re: [Haskell-cafe] ANNOUNCE: Sun Microsystems and Haskell.org joint project on OpenSPARC

2008-07-24 Thread Ben Lippmeier


On 25/07/2008, at 8:55 AM, Duncan Coutts wrote:

Right. GHC on SPARC has also always disabled the register window when
running Haskell code (at least for registerised builds) and only  
uses it

when using the C stack and calling C functions.



I'm not sure whether register windows and continuation based back-ends  
are ever going to be very good matches - I don't remember the last  
time I saw a 'ret' instruction in the generated code :). If there's a  
killer application for register windows in GHC it'd be something tricky.


I'd be more interested in the 8 x hardware threads per core, [1]  
suggests that (single threaded) GHC code spends over half its time  
stalled due to L2 data cache miss. 64 threads per machine is a good  
incentive for trying out a few `par` calls..


Ben.

[1] http://www.cl.cam.ac.uk/~am21/papers/msp02.ps.gz



Re: [Haskell-cafe] ANNOUNCE: Sun Microsystems and Haskell.org joint project on OpenSPARC

2008-07-24 Thread Ben Lippmeier


On 25/07/2008, at 12:42 PM, Duncan Coutts wrote:


Of course then it means we need to have enough work to do. Indeed we
need quite a bit just to break even because each core is relatively
stripped down without all the out-of-order execution etc.


I don't think that will hurt too much. The code that GHC emits is very  
regular and the basic blocks tend to be small. A good proportion of it  
is just for copying data between the stack and the heap. On the  
upside, it's all very clean and amenable to some simple peephole  
optimization / compile time reordering.


I remember someone telling me that one of the outcomes of the Itanium  
project was that they didn't get the (low level) compile-time  
optimizations to perform as well as they had hoped. The reasoning was  
that a highly speculative/out-of-order processor with all the  
trimmings had a lot more dynamic information about the state of the  
program, and could make decisions on the fly which were better than  
what you could ever get statically at compile time. -- does anyone  
have a reference for this?


Anyway, this problem is moot with GHC code. There's barely any  
instruction level parallelism to exploit anyway, but adding an extra  
hardware thread is just a `par` away.


To quote a talk from that paper earlier: "GHC programs turn an Athlon
into a 486 with a high clock speed!"


Ben.




[Haskell-cafe] ANN: The Disciplined Disciple Compiler - alpha 1.1

2008-07-03 Thread Ben Lippmeier


Hi All,
I'm pleased to announce version 1.1
   of the Disciplined Disciple Compiler (DDC)

Disciple is an explicitly lazy dialect of Haskell which supports:
 - first class destructive update of arbitrary data.
 - computational effects without the need for state monads.
 - type directed field projections.
 - allied functional goodness.

All this and more through the magic of effect typing.

New in this version:
 - support for x86_64 under linux and darwin, thanks to Jared Putnam.
 - the -make flag now does a full dependency driven build/rebuild.
 - constructor classes.
 - irrefutable patterns.
 - partial support for monadic do notation.
 - an unboxed Bool# type with true# and false# literals.
 - field projection punning.
 - lots more example code.

Project page with full release notes and download at:
  http://www.haskell.org/haskellwiki/DDC

DDC: more than lambdas.

Ben.



Re: [Haskell-cafe] one-way monads

2008-05-20 Thread Ben Lippmeier


On 20/05/2008, at 3:54 PM, Zsolt SZALAI wrote:


Here comes IO and one-way monads, where the internal state can not be
extracted, and it seems that the internal data is global to the program.
How could that be? Is it just because main :: IO () or because the
implementation of IO uses external C functions and there is no
internal state in the monad itself at all?


Operationally, the IO monad doesn't have any internal state. The  
'state' would be the outside world - which can't really be represented  
at runtime.


The IO monad is a state monad that passes around a special token that
/represents/ the outside world. What the token actually is doesn't
matter, but passing it around in a single-threaded manner in the Core
IR provides data dependencies that ensure that operations on the world
happen in the correct order. References to this token actually get
erased before code generation. In GHC this erasure is called the
'state-hack'.



And why can one not write a function that makes an IO computation where
the return type does not include an IO constructor? Is it a feature of
the language and specific to IO, or could anybody write a monad with
this property (one-way), assuming the implementation does not use
external languages?


IO isn't a feature of the language, it's a type defined in the  
library. You can define your own IO-style monads if you like.



Or the one-way property is just that, there is no such functions, that
allow extracting internal data?


Just that. It's one way because there are no functions to convert an
'IO a' into an 'a' (mostly). Doing that would break the desired
single-threadedness property.


If you're convinced that violating this single-threadedness won't
break your program you can use unsafePerformIO :: IO a -> a, but this
performs the action, it doesn't give you the actual world token.
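The world-passing idea described above can be sketched as a toy monad. This is only an illustration, not GHC's real IO implementation: `MyIO`, `World` and `runMyIO` are invented names, and a real IO would never let you run the action purely like `runMyIO` does here.

```haskell
-- A toy world-passing monad in the spirit described above.
newtype World = World Int            -- stand-in token; real IO never inspects it
newtype MyIO a = MyIO (World -> (World, a))

instance Functor MyIO where
  fmap f (MyIO g) = MyIO (\w -> let (w', a) = g w in (w', f a))

instance Applicative MyIO where
  pure a = MyIO (\w -> (w, a))
  MyIO mf <*> MyIO ma =
    MyIO (\w -> let (w',  f) = mf w
                    (w'', a) = ma w'
                in (w'', f a))

instance Monad MyIO where
  -- Threading the token through '>>=' is what gives the data
  -- dependencies that order the effects.
  MyIO m >>= k = MyIO (\w -> let (w', a) = m w
                                 MyIO m' = k a
                             in m' w')

runMyIO :: MyIO a -> a
runMyIO (MyIO m) = snd (m (World 0))
```

Note that `runMyIO` plays the role of unsafePerformIO: it conjures a world token out of nothing, which is exactly why the real thing is unsafe.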


Ben.




[Haskell-cafe] Typing with co/contra-variance.

2008-04-03 Thread Ben Lippmeier

Hi all,
I have some quick questions for the type theory people:

If I write an expression:
 (if .. then 23 else "Erk")

In Haskell this would be an error, but perhaps I can assign it the type 
'Top' (or 'Any') and then use reflection ala Data.Dynamic to inspect the 
type of this object at runtime and cast it back to something useful...
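The Data.Dynamic approach mentioned above can be sketched directly: both branches are wrapped into a `Dynamic` (a Top-like type), and the consumer inspects and casts at runtime. The names `topIf` and `useAsInt` are invented for illustration.

```haskell
import Data.Dynamic

-- Both branches get the same type (Dynamic), even though the
-- underlying values have different types.
topIf :: Bool -> Dynamic
topIf b = if b then toDyn (23 :: Int) else toDyn "Erk"

-- Casting back: succeeds only if the runtime type matches.
useAsInt :: Dynamic -> Maybe Int
useAsInt = fromDynamic
```

So `useAsInt (topIf True)` recovers the Int, while `useAsInt (topIf False)` fails gracefully with Nothing rather than a type error.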



But if I write:

isEven "Erk"

where
 isEven :: Int -> Bool
 isEven i
  = case i of
      1 -> False
      2 -> True
      ...

Then this would be a genuine type error, because the case-expression 
demands an actual Int at runtime.



Questions:
1) Could I perhaps wave my arms around and say something about the Int 
in the type of isEven being in a  contra-variant position? Is this what 
is actually happening?, and can someone point me to a good paper?


2) This seems simpler than real intersection types, which I know are 
problematic. If I only want 'Top' and not a real object system like in 
Scala or Java, then can I just add this to a HM/unification style 
inference algorithm and expect it to work? I'm thinking the unifier only 
needs to know the variance of the types it's unifying to decide whether 
to throw an error or return Top - or will there be problems with higher 
order functions etc?



I had a read through the subtyping chapter in Types and Programming 
Languages by Pierce - it gives the typing rules but don't mention 
inference or implementations.


Scala seems to have this 
(http://www.scala-lang.org/intro/variances.html) but as I understand it, 
their system doesn't support full type inference without user supplied 
annotations...


BTW: searching for 'the any type' in Google helps even less than you 
might expect :)


Thanks, Ben.




[Haskell-cafe] ANN: The Disciplined Disciple Compiler - alpha 1

2008-03-20 Thread Ben Lippmeier

Hi All,
I'm pleased to announce the initial alpha release
of the Disciplined Disciple Compiler (DDC).

Disciple is an explicitly lazy dialect of Haskell which includes:
- first class destructive update of arbitrary data.
- computational effects without the need for state monads.
- type directed field projections.

All this and more through the magic of effect typing.

More information (and download!) available from:
   http://www.haskell.org/haskellwiki/DDC
or  http://code.google.com/p/disciple

DDC: more than lambdas.

Onward!
Ben.





Re: [Haskell-cafe] ANN: The Disciplined Disciple Compiler - alpha 1

2008-03-20 Thread Ben Lippmeier


Hi Wolfgang,
Rest assured it really is a Haskell project. Apart from the way it  
does field projections, DDC will compile a significant number of  
Haskell programs with no modifications.


Ben.


On 21/03/2008, at 1:08 AM, Wolfgang Jeltsch wrote:
Short question: Is it appropriate to put the homepage of a non-Haskell
project on the Haskell Wiki?  I mean, putting some basic info about such a
project there and linking to the project’s website might be okay and is
already done in certain cases.  But projects like Agda or Epigram typically
don’t use haskell.org as a webspace provider and I think this is the way to
go.  What do others think?

Best wishes,
Wolfgang




Re: [Haskell-cafe] ANN: The Disciplined Disciple Compiler - alpha 1

2008-03-20 Thread Ben Lippmeier

Hi Tim,
DDC doesn't aim for Haskell 98 compliance - its really a superset of a  
subset of it - but I've followed Haskell syntax and philosophy where  
ever possible. The compiler is written in Haskell and I want DDC to  
support as much of it as possible to make its eventual boot-strapping  
easier. There's much more common ground between Disciple and Haskell  
than say, ML and O'Caml, or Haskell and Clean.


Most of the expression syntax is there eg: function binding, lambdas,  
let, where, pattern matching, case expressions, pattern guards, data  
type definitions, class and instance definitions etc etc.


For the alpha version at least, the main deviations are:
 - dictionary passing is not finished.
 - you'll need to put region annots on recursive data type defs as  
the elaboration isn't quite finished.

 - it uses strict evaluation as default
 - field projections are different
 - no monadic desugaring in do notation.
 - no irrefutable patterns yet.

The rest is all Haskell 98 (minus all the effect typing extensions, of  
course!).


For the alpha2 release I'm hoping that most straight-up Haskell 98  
programs will compile with it after some cosmetic modifications:  
mostly adding the suspension operator where appropriate, and using the  
new field projection syntax.


Ben.



Can you elaborate on "a significant number of Haskell programs"? Do
you expect that DDC can compile any Haskell (98?) program except some
weird corner cases, or are you aware of a particular class of Haskell
programs it currently can't compile?

(I'm asking in order to find out whether DDC would potentially be
useful for my work, not so as to question whether it should be on
haskell.org (I don't care about that :-))

Cheers,
Tim

--
Tim Chevalier * http://cs.pdx.edu/~tjc * Often in error, never in  
doubt

It's easy to consider women more emotional than men when you don't
consider rage to be an emotion. -- Brenda Fine




Re: [Haskell-cafe] Annotating GHC assembler output?

2008-03-03 Thread Ben Lippmeier

Hi Justin.

try:  ghc -c file -ddump-to-file -ddump-asm

You should get a .dump.asm file in the same place as file which still 
has symbols named after the source functions. Keep in mind though that 
the continuation passing style (CPS) conversion done in the back end of 
GHC causes the code not to look like something you might get out of, say 
GCC.


Ben.


Justin Bailey wrote:

I'm interested in seeing what kind of assembler my functions turn
into. Is there a means of annotating assembler output, similar to the
{#- CORE -#} pragma? Is there a trickier way of doing it?

Justin
  




Re: [Haskell-cafe] Annotating GHC assembler output?

2008-03-03 Thread Ben Lippmeier


And to answer your actual question..
No - notes in the core language get stripped out during conversion to STG.

Ben.

Justin Bailey wrote:

I'm interested in seeing what kind of assembler my functions turn
into. Is there a means of annotating assembler output, similar to the
{#- CORE -#} pragma? Is there a trickier way of doing it?

Justin
  




[Haskell-cafe] Examples of useful functions of order >= 3?

2007-12-14 Thread Ben Lippmeier

Hi all,

I'm working on a type-based side effect analysis for a Haskell-like
language, and need to test it against some more higher order functions.
The situation is a bit icky because it uses bounded quantification
and various related issues (ie covariance vs contravariance) only
come into play when dealing with functions of order >= 3.

Trouble is, the vast majority of useful higher order functions
that I can think of are of order 2. Oh, I can certainly invent toys
like:

order3 f   = f succ
order4 b f = if b then f else order3

.. and so on, but I'm left feeling unsatisfied.

I vaguely remember a paper called something like "Is there any use
for a seventh order function?", but I forget why they were used, and
I can't find it on Google, or ACM or any of the other likely places.

Does anyone have any examples of code (in whatever language) which
usefully uses functions of order >= 3? Preferably up to 5?
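One frequently cited non-toy example of an order-3 function is callCC from continuation-passing style. (The half-remembered paper may be Okasaki's "Even higher-order functions for parsing", which discusses sixth-order parser combinators.) Below is a minimal hand-rolled sketch, not the mtl library version; `example` is an invented demonstration.

```haskell
-- A continuation computation: order 2 as a type.
type Cont r a = (a -> r) -> r

-- callCC takes a function, whose argument is itself a function
-- returning a suspended computation: its argument type
-- ((a -> Cont r b) -> Cont r a) makes callCC order >= 3.
callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
callCC f k = f (\a _ -> k a) k

-- Early exit: negative input aborts with 0, otherwise doubles.
example :: Int -> Int
example n =
  callCC (\exit -> if n < 0 then exit 0 else \k -> k (n * 2)) id
```

Here `exit` discards the rest of the computation by ignoring the continuation it is handed, which is the usual use of callCC.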

Thanks,
Ben.



[Haskell-cafe] 2D game graphics library for Haskell?

2007-08-24 Thread Ben Lippmeier
 

Hi Peter, 

The OpenGL/GLUT bindings support all the things you would want, but it's a
bit too much pain for first year students. 

 

For the last couple of years at the ANU (Australian National University,
Canberra) we've been using a front end library that I wrote which is similar
to SOE/HGL but with support for images, animation, alpha blending and some
other things. I think the real trick is hiding enough of the OpenGL/GLUT
internals to make it suitable for first year students, while at the same
time exposing enough functionality so they don't feel constrained by what
they can do with the library. Usually we think of the project we want the
students to do, then supply most of the infrastructure via the library -
leaving the students to fill in the 'fun' stuff.

 

There is the added benefit that the OpenGL/GLUT bindings (and hence our
library also) compiles out of the box on both Linux and Windows. We use
linux machines at uni for the student labs, but students have been able to
take their code home and get it running on their home Windows PC's without
much difficulty.

 

You can get our library (with examples) from my homepage at
http://cs.anu.edu.au/people/Ben.Lippmeier/

 

I've also got a simple asteroids game which I wrote with an extended version
of it. There is a playable version, but unfortunately it's on my home
machine that is now packed in storage while I'm at MSR, Cambridge. I'm
getting back to Canberra late October so if you're still interested I can
dig it out and send you a copy then.

 

Cheers,

Ben.

 

 

 

From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of peterv
Sent: 24 August 2007 11:32
To: haskell-cafe@haskell.org
Subject: [Haskell-cafe] 2D game graphics library for Haskell?

 

I'm currently playing around with SOE to make some simple interactive math
exercises for students. This worked fine, although I could have done this
much faster using C# (which I know very well), but since I'm addicted to
Haskell now, I used the latter language ;) Furthermore, I hope that one day
I will know enough Haskell to teach it to the students, because I feel that
functional programming should not be given in the last bachelor or master
years, since most software engineering students then know OO programming
extremely well and have a horrible time with FP (I currently did not meet
anyone in my sector of game development that liked FP, and many of those
people had a masters degree and some were PhDs).

 

Anyway, SOE is great for learning Haskell, but it lacks a couple of
fundamental functions to make it really attractive, like:

 

-  Support for images

-  Support for rendering to an offscreen graphics surface and
reading the pixels from that surface (for pixel-wise collision detection)

-  Support for detecting non-ASCII key presses (cursor keys, etc)

-  Support for joysticks

 

Concurrent Clean seems to have a nice 2D game library and PLT/DrScheme also
has nice support for basic 2D graphics, but somehow I feel Haskell is more
mature and more elegant. 

 

So before digging into advanced APIs (like GTK itself, which I know
nothing about, I'm a Win32 GDI/XNA/WPF expert), I should ask the question if
something similar exists? It has to be as simple as SOE.

 

Would it be possible to extend the GTK SOE with support for the features
mentioned above? Is this insanely difficult for someone like me who knows a
lot about Win32 but little Haskell?

 

Thanks,

Peter Verswyvelen

 

 




Re: [Haskell-cafe] Evaluating arithmetic expressions at run time

2006-01-27 Thread Ben Lippmeier


Andrew Savige wrote:
 Haskell beginner using GHC.
Hello there!


How about,

opTable
 = [ ("+", (+))
   , ("-", (-)) ... ]

myeval x y op
 = let  Just fun = lookup op opTable
   in   x `fun` y

?
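The sketch above can be made total as well: returning Maybe instead of pattern-matching on `Just fun` means an unknown operator name yields Nothing rather than a runtime pattern-match failure. `myevalSafe` is an invented variant for illustration.

```haskell
-- Table of operator names to functions; all entries must share a type.
opTable :: [(String, Int -> Int -> Int)]
opTable = [("+", (+)), ("-", (-)), ("*", (*))]

-- Total variant: a bad operator name gives Nothing instead of crashing.
myevalSafe :: Int -> Int -> String -> Maybe Int
myevalSafe x y op = fmap (\f -> f x y) (lookup op opTable)
```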




I have a function to do some simple arithmetic at run time:

myeval :: Int -> Int -> String -> Int
myeval x y "+" = (+) x y
myeval x y "-" = (-) x y
myeval x y "*" = (*) x y
-- ...

While that works, I'm curious to know if it can be done more
elegantly. I'm thinking of something like:

myeval :: Int -> Int -> String -> Int
myeval x y op = (read op) x y

Thanks,
/-\




 
Do you Yahoo!? 
Listen to over 20 online radio stations and watch the latest music videos on Yahoo! Music. 
http://au.launch.yahoo.com





Re: [Haskell-cafe] Evaluating arithmetic expressions at run time

2006-01-27 Thread Ben Lippmeier


BTW: There isn't an easy way to magically derive opTable. We're relying 
on the fact that these simple functions all have the same type.


Ben Lippmeier wrote:


opTable
 = [ ("+", (+))
   , ("-", (-)) ... ]

myeval x y op
 = let  Just fun = lookup op opTable
   in   x `fun` y




You could also do

myeval x y op
 = case op of
     "+" -> x + y
     "-" -> x - y
     ...

which is probably the smallest way.

Ben.


Re: [Haskell-cafe] Shootout favouring C

2006-01-16 Thread Ben Lippmeier


Daniel Fischer wrote:
However, I'm curious. Is the behaviour I reported peculiar to my flavour of 
linux, or does the same trend show on other machines?

Would you be so kind to test and report?


Daniel,
For doing benchmarks with short runtimes on linux, you might want to 
double check that the CPUSpeed daemon isn't changing your clock rate 
while you're trying to do your measurements.


Modern AMD chips, and some mobile Intel chips have clock rate scaling. 
The CPUSpeed daemon on my AMD64/Fedora4 machine scales the clock rate 
between 1 and 2Ghz depending on system load.


Send the daemon SIGUSR1 to force maximum speed, and check /proc/cpuinfo 
to make sure.


This has certainly confused me in the past.

Ben.


Re: [Haskell-cafe] puzzling memory leak? (GHC)

2005-10-10 Thread Ben Lippmeier


Young,
This sounds like a text-book space-leak caused by lazy evaluation.

In a lazy language, function evaluation is driven by the need to perform 
IO. Because the first version of your program never prints the parsed 
structure, the parser doesn't completely parse it. This implies that the 
system needs to hang onto all the input data (as well as all the 
partially evaluated calls to the parser) incase it needs it later on.


The default string representation in Haskell is pretty abysmal, and
having it use hundreds of megs to represent, say, a 10 meg file is not
too surprising.


By modifying your fixArts function to print the parsed structure you've 
forced the system to actually finish evaluating the parser, which allows 
the input data to be garbage collected.


My advice is that if you don't want to fool around with it, just swallow 
hard, then change fixArts to do a hPutArts to /dev/null.


Either that or
  1) go read about the DeepSeq library.
  2) add strictness annotations to your structure definition.
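Option (2) can be sketched with base-only tools. `Art` and `forceList` are invented names for illustration: the bang on a field makes construction force it, and folding seq over a list forces its spine and the WHNF of each element so the input can be collected.

```haskell
-- Strictness annotation: building an Art forces artId immediately,
-- so no thunk chain accumulates behind that field.
data Art = Art { artId :: !Int, artName :: String }

-- Force the spine of a list and each element to weak head normal form.
forceList :: [a] -> ()
forceList = foldr seq ()
```

Evaluating `forceList parsed` before discarding `parsed` plays the same role as printing it: it drives the lazy parser to completion.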

Ben.



fixArts ((Right x):xs) = do hPutArts stderr x
                            fixArts xs




[Haskell-cafe] Re: [Haskell] Mixing monadic and non-monadic functions

2005-09-08 Thread Ben Lippmeier


 Do you disagree with that
 assertion? Or are you just saying that the syntax change as I propose
 it is called effect analysis?

Sorry, I thought you were talking about the usability of IO and State 
monads in particular, not more general ones.


When I was talking about effect analysis it was along the same lines as
how (fun1 :: a -> IO b) and (fun2 :: a -{IO} b) are basically
equivalent - though IMHO fun2 can be easier to use because its type is
a bit more orthogonal to a pure function (fun3 :: a -> b).


Ben.




Re: [Haskell-cafe] How to debug GHC

2005-09-02 Thread Ben Lippmeier



... It's very hard to debug a large program when you
can randomly get messages like *** Exception: Prelude.head: empty
list and have no idea where they came from. 




As a purely pragmatic suggestion: don't use head, fromJust, last, or any
other function that is likely to fail in an impossible-to-find way, at
least not directly.


In GHC, you can wrap or replace them with irrefutable patterns which are 
 almost as easy to write, and will give you a sensible error message if 
they fail.


Example:

 replace   x = head xx
 with      (x:_) = xx

 replace   x = fromJust mX
 with      (Just x) = mX

 replace   x = last xx
 with
   y@(_:_) = xx
   x       = last y
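Another hedged variant of the same advice: wrap the partial functions so that failures carry a source-location string. `headNote` and `fromJustNote` are invented names here (the later 'safe' package provides functions along these lines).

```haskell
-- Like head, but a failure names the call site instead of
-- "Prelude.head: empty list".
headNote :: String -> [a] -> a
headNote _   (x:_) = x
headNote loc []    = error ("headNote: empty list at " ++ loc)

-- Like fromJust, but with a located error message.
fromJustNote :: String -> Maybe a -> a
fromJustNote _   (Just x) = x
fromJustNote loc Nothing  = error ("fromJustNote: Nothing at " ++ loc)
```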


Ben.



Re: [Haskell-cafe] Monadic vs pure style (was: pros and cons of sta tic typing and side effects)

2005-08-30 Thread Ben Lippmeier



There is no inherent advantage or disadvantage
to monads. If the idea is most clearly expressed
as a monad, use a monad. If the idea is most
clearly expressed recursively, write it recursively
(but perhaps wrap it in return).


Perhaps the inherent disadvantage is that functions written in the 
monadic style must have different types compared with their conceptually 
similar non-monadic functions..


mapM    :: Monad m => (a -> m b) -> [a] -> m [b]
map     :: (a -> b) -> [a] -> [b]

filter  :: (a -> Bool) -> [a] -> [a]
filterM :: (Monad m) => (a -> m Bool) -> [a] -> m [a]

foldl   :: (a -> b -> a) -> a -> [b] -> a
foldM   :: (Monad m) => (a -> b -> m a) -> a -> [b] -> m a

Some would say "but they're different functions!", others would say
"close enough".
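One concrete sense in which they are "close enough": the pure versions can be recovered by running the monadic ones in the Identity monad. A sketch (Data.Functor.Identity is in modern base; at the time of this thread it lived elsewhere):

```haskell
import Control.Monad (filterM)
import Data.Functor.Identity

-- map recovered from mapM by instantiating m = Identity.
map' :: (a -> b) -> [a] -> [b]
map' f = runIdentity . mapM (Identity . f)

-- filter recovered from filterM the same way.
filter' :: (a -> Bool) -> [a] -> [a]
filter' p = runIdentity . filterM (Identity . p)
```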


I imagine this would be an absolute pain for library writers. Notice 
that we get Data.Map.map but no Data.Map.mapM - or perhaps there's some 
magical lifting combinator that I am not aware of?
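For what it's worth, the "magical lifting combinator" does exist in later library versions: Data.Map became Traversable, so the generic mapM/traverse works directly over its elements. A sketch assuming a modern containers package:

```haskell
import qualified Data.Map as Map

-- The missing Data.Map.mapM, via the Traversable instance of (Map k).
mapMVals :: Monad m => (a -> m b) -> Map.Map k a -> m (Map.Map k b)
mapMVals = mapM
```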


Ben.





Re: [Haskell-cafe] Monadic vs pure style (was: pros and cons of sta tic typing and side effects)

2005-08-30 Thread Ben Lippmeier


Perhaps you could write _everything_ in monadic style, and then derive 
the non-monadic version by running it on an empty state monad. But 
then if everything was already monadic you wouldn't need the non-monadic 
version..  :)


...

Perhaps the inherent disadvantage is that functions written in the 
monadic style must have different types compared with their conceptually 
similar non-monadic functions..


mapM    :: Monad m => (a -> m b) -> [a] -> m [b]
map     :: (a -> b) -> [a] -> [b]

filter  :: (a -> Bool) -> [a] -> [a]
filterM :: (Monad m) => (a -> m Bool) -> [a] -> m [a]

foldl   :: (a -> b -> a) -> a -> [b] -> a
foldM   :: (Monad m) => (a -> b -> m a) -> a -> [b] -> m a

Some would say "but they're different functions!", others would say
"close enough".


I imagine this would be an absolute pain for library writers. Notice 
that we get Data.Map.map but no Data.Map.mapM - or perhaps there's some 
magical lifting combinator that I am not aware of?


Ben.



[Haskell-cafe] Proposal: deriving ShallowEq?

2005-07-19 Thread Ben Lippmeier


Hello,

I often find it useful to determine whether two objects are using the 
same constructor, without worrying about the constructors' arguments.


An example, using some arbitrary data type Thingo:

 class ShallowEq a where
  shallowEq :: a -> a -> Bool

 data Thingo a b
= TOne   a
| TTwo   a b Int Char Float
| TThree Int Char b b

 (TOne 23) `shallowEq` TOne{}
True

 (TThree 5 'c' True False)  `shallowEq` TTwo{}
False

--
Having some sort of generic shallowEq operator reduces the need for a 
host of predicates such as: (this one from Data.Maybe)


 isJust x
  = case x of
      Just {} -> True
      _       -> False

.. which is an approach that is obviously going to be tedious when the 
size of the data type becomes large.


--
There is way to hack together a partial implementation of the ShallowEq 
class within GHC, but it leaves much to be desired:


 instance Show a => ShallowEq a where
  shallowEq a b
    = (head $ words $ show a) == (head $ words $ show b)

Notice that in the example the term TTwo{} is a partially constructed
record. The implementation relies on laziness to avoid trying to show
the missing fields (which would fail).


--
Questions:
 1) Does anyone know a better/existing way to implement ShallowEq that 
doesn't involve enumerating all the constructors in the data type?


 2) If not, can anyone think of reasons why it wouldn't be a good idea 
for GHC to derive ShallowEq (by expanding said enumeration)?





Re: [Haskell-cafe] Proposal: deriving ShallowEq?

2005-07-19 Thread Ben Lippmeier

Ralf Lammel wrote:

As Bulat points out, the GHC primitive dataToTag#
indeed nicely solves the problem. Ben, just for
completeness' sake; with SYB, you get such reflective
information too (and others):

shallowEq :: Data a => a -> a -> Bool
shallowEq x y = toConstr x == toConstr y

(dataToTag# returns Int, while toConstr comprises other things like the
constructor name.)
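The SYB version can be made concrete with a derived Data instance. The `Thingo` constructors below are a cut-down, invented variant of the original example, just enough to exercise toConstr:

```haskell
{-# LANGUAGE DeriveDataTypeable #-}
import Data.Data

-- A cut-down Thingo with a derived Data instance.
data Thingo = TOne Int | TTwo Char Bool
  deriving (Typeable, Data)

-- Constructor-only equality: arguments are ignored entirely.
shallowEq' :: Data a => a -> a -> Bool
shallowEq' x y = toConstr x == toConstr y
```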



Ralf,
Yes, I ended up using the proper SYB approach instead, though I have
noticed that the reflection data types Constr and DataRep make no
mention of type variables or functions.


For example, this works fine:
 getTag (Just 5)         ==# getTag (Just{})
 getTag (Just (\x -> x)) ==# getTag (Just{})

But this does not:
 toConstr (Just 5) == toConstr (Just{})
Ambiguous type variables.

 toConstr (Just (\x -> x)) == toConstr (Just{})
No instance for Data (t -> t)

I appreciate the reasons why this is so, though I think it's interesting 
to see the practical consequences.


A toConstr version of shallowEq works ok so long as you provide a type 
signature to constrain both arguments to be the same type, and one of 
them is always fully constructed - which is fine for me at the moment.


Ben.




[Haskell-cafe] Re: [Haskell] Strictness question

2005-06-07 Thread Ben Lippmeier


 To gloss over details: it'll reduce x far enough so it knows that
 it's an Integer, but it won't necessarily compute that integer's
 value.

 No, Integers don't contain any lazy components.
 It statically knows that it's an integer.

I meant that it would reduce to the outermost constructor but not
necessarily evaluate the rest of the object.


Ok, I actually looked up the implementation of Integer in GHC.

 -- | Arbitrary-precision integers.
 data Integer   
   = S# Int# -- small integers
 #ifndef ILX
   | J# Int# ByteArray#  -- large integers
 #else
   | J# Void BigInteger  -- .NET big ints

You were right and I was wrong, Integers contain no lazy components. 
Perhaps that just highlights the folly of guessing how much actually 
gets evaluated in a lazy language.. :)


Ben.


Re: [Haskell-cafe] Where do you use Haskell?

2005-05-02 Thread Ben Lippmeier
Daniel Carrera wrote:
I've been reading, and I'm really liking the elgance of Haskell, and FP 
as a whole. But I wonder about the range of applicability.
Oooohh baby, you sure have stumbled across the proverbial can'o'worms.
Yes, others have asked themselves the question, and some have 
dedicated a large slab of their waking hours trying to make some sense 
of it all.

Many aspects of functional programming are generalisations of what goes 
on in imperative languages. One might expect this would make FP somewhat 
_more_ applicable, which is true - but there are complications.

You might like to take a deep breath and start with:
"Tackling the awkward squad: monadic input/output, concurrency,
exceptions, and foreign-language calls in Haskell" - Simon Peyton Jones

http://research.microsoft.com/Users/simonpj/papers/marktoberdorf/
It might give you a better overview of the problem. As for the 
solution.. uh... stay tuned.

PS: Keep asking those questions.


Re: [Haskell-cafe] how do I avoid excessive constructor application?

2005-03-01 Thread Ben Lippmeier
S. Alexander Jacobson wrote:
For some reason, these two functions have different types.
  fun1 f (Left x)    = Left (f x)
  fun1 _ r@(Right x) = Right x
  fun2 f (Left x) = Left (f x)
  fun2 _ r = r
fun1 :: forall a a1 b . (a -> a1) -> Either a b -> Either a1 b
fun2 :: forall a b .    (a -> a)  -> Either a b -> Either a  b
fun1 is indeed more general than fun2 because there is no way for an x 
inside a (Left x) from the LHS of the function to be returned as part of 
the result.

---
You can play games with the type checker to force them to have the same 
type without changing the meaning of your function.

fun1' f (Left x)    = if True then Left (f x) else Left x
fun1' _ r@(Right x) = Right x
 :type fun1'
fun1' :: forall b a. (a -> a) -> Either a b -> Either a b
This assumes that the compiler doesn't perform an optimisation that 
throws away the second alternative of the if statement before it does 
type checking.

---
A more sensible way is to add an explicit type signature to force it to 
have a less general type than what was inferred.

fun1 :: forall a b . (a -> a) -> Either a b -> Either a b

Is there a way to rewrite fun2 so that f has type (a -> b)?
Delete the second line, but then you have a different function.

In the general case, it seems wasteful to have to destruct and construct 
values just for type checking reasons, especially if your type has many 
more constructors than (Either a b).
Standard type inference always returns the *most general* type, and it 
is never wrong (unless there's a bug in the compiler).

If you actually want a less general type for one of your functions 
(maybe the more general one isn't useful in your particular program) 
then add a type signature to constrain it.

Ben.


Re: [Haskell-Cafe] FFI and foreign function returning a structure

2005-03-01 Thread Ben Lippmeier
Dimitry Golubovsky wrote:
Hi,
If I have a C function returning a structure (not a pointer, but 
structure itself), can this be accomodated via FFI?
No. The way data is organised in memory is dramatically different in 
Haskell when compared with C. You need to write functions to read in 
each field in turn and then reconstruct the structure on the Haskell 
side.

It's a tedious process. My advice is that if you have a lot of 
structures to read, write a (simple) preprocessor to generate the 
marshalling code.. that's what I did.
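For example, given a hypothetical C struct `struct point { int x; int y; }` (the struct and its field layout here are my illustration, not from the original mail), the field-by-field marshalling can be packaged up as a Storable instance; the C side then passes a pointer to the struct rather than returning it by value:

```haskell
import Foreign
import Foreign.C.Types (CInt)

-- Haskell mirror of the hypothetical C struct:
--   struct point { int x; int y; };
data Point = Point CInt CInt
  deriving (Eq, Show)

instance Storable Point where
  sizeOf    _ = 2 * sizeOf (undefined :: CInt)
  alignment _ = alignment (undefined :: CInt)
  peek p = do
    -- read each field in turn at its byte offset
    x <- peekByteOff p 0
    y <- peekByteOff p (sizeOf (undefined :: CInt))
    return (Point x y)
  poke p (Point x y) = do
    pokeByteOff p 0 x
    pokeByteOff p (sizeOf (undefined :: CInt)) y
```

Once the instance exists, `peek`/`poke` on a `Ptr Point` do the whole round trip, which is exactly the code a preprocessor would generate.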

Ben.


[Haskell-cafe] Re: [Haskell] Implicit parallel functional programming

2005-01-20 Thread Ben Lippmeier
Satnam Singh wrote:
Ample looks interesting. What license does it use? I had a quick look
over the source and can't find anything. Is there a port to Windows or
does it not do any OS specific UI/graphics?
By default, we're releasing it under GPL. I guess I should place a copy 
of the license somewhere in the tarball...

For the graphs it produces, it just emits a gnuplot source file 
(textfile, easy) and then automagically calls gnuplot to display that file.

It would probably work straight out of the box under cygwin (or similar) 
with an appropriate build of gnuplot in your current path.

Failing that, you (or I) could easily comment out the "call gnuplot" 
stage and end up with a vanilla console-IO application.

If it gives you any trouble then send me a mail and I'll fix it.
... There's also a paper that goes with it, which used to be on Clem's 
page, but I see that the link is broken now; I'll get it fixed.

...
a (very slow) simulator you might want to poke around with. I wrote it 
based around Clem Baker-Finch's Abstract machine for parallel lazy 
evaluation, which supports fully speculative implicit parallelism.

There's a link to it on my homepage at 
http://cs.anu.edu.au/people/Ben.Lippmeier



Re: [Haskell-cafe] FFI woes!

2004-12-15 Thread Ben Lippmeier
System.Mem.performGC?
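A small sketch of checking that a finalizer fires (my own example, not from the thread): performGC asks the RTS for a collection, but GHC runs finalizers asynchronously, so the deterministic way to see one run is finalizeForeignPtr.

```haskell
import Data.IORef
import Foreign.Concurrent (newForeignPtr)
import Foreign.ForeignPtr (finalizeForeignPtr)
import Foreign.Marshal.Alloc (mallocBytes, free)
import Foreign.Ptr (Ptr)
import System.Mem (performGC)

-- Returns True once the finalizer attached to the ForeignPtr has run.
finalizerRuns :: IO Bool
finalizerRuns = do
  ran <- newIORef False
  p   <- mallocBytes 16 :: IO (Ptr ())
  fp  <- newForeignPtr p (free p >> writeIORef ran True)
  performGC                -- request a collection; finalizers still
                           -- run asynchronously some time after this
  finalizeForeignPtr fp    -- force the finalizer to run right now
  readIORef ran
```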
Sebastian Sylvan wrote:
Another question!
Is there a way to force the garbage collector to kick in?
I'm trying to find out if my finalizer gets called correctly but I
don't know if the garbage collector is run.
/S



Re: [Haskell-cafe] Re: The State Monad

2004-10-07 Thread Ben Lippmeier
John Goerzen wrote:
Which leaves me with an odd curiosity -- I still can't figure out how
state monads are anything but syntactic sugar (and lead more to
spaghetti code at that &lt;g&gt;)
Perhaps because state monads = syntactic sugar.
The state monad is just a nice(er) way of passing around some global 
state (Junk).

Without state monads:
  f :: Junk -> a -> (Junk, b)
With state monads:
  f :: a -> State Junk b

Though if some function doesn't need to 'modify' your Junk, you find 
yourself having to re-factor things like,

decend :: Junk -> Exp -> Exp
decend state (Node a t1 t2)
 = Node a (decend state t1) (decend state t2)

into

decend :: Exp -> State Junk Exp
decend (Node a t1 t2)
 = do
    t1' <- decend t1
    t2' <- decend t2
    return $ Node a t1' t2'

.. which IMHO is not as pretty.
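Here is that refactor made runnable (the Leaf constructor and the node-counting state are my additions for completeness), using State from the transformers package:

```haskell
import Control.Monad.Trans.State (State, modify, runState)

-- A small expression tree; Leaf is added so the example is complete.
data Exp = Leaf Int | Node Int Exp Exp
  deriving (Eq, Show)

-- Hypothetical Junk: here, just a count of the nodes visited.
type Junk = Int

decend :: Exp -> State Junk Exp
decend (Leaf a) = do
  modify (+ 1)
  return (Leaf a)
decend (Node a t1 t2) = do
  modify (+ 1)
  t1' <- decend t1
  t2' <- decend t2
  return (Node a t1' t2')

-- runState threads the Junk through for us:
--   runState (decend tree) 0
```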
Ben.


Re: [Haskell-cafe] Space leaks

2004-10-05 Thread Ben Lippmeier
In my experience, any time your program makes use of some state-like 
structure which gets updated over a number of iterations, you're going 
to be in trouble with space leaks.

There's a library function which summarises this nicely,
mapAccumL :: (state -> x -> (state, y)) -> state -> [x] -> (state, [y])
The GHC library docs describe mapAccumL as
The mapAccumL function behaves like a combination of map and foldl; it 
applies a function to each element of a list, passing an accumulating 
parameter from left to right, and returning a final value of this 
accumulator together with the new list.

Now imagine that state is some large structure. The [x] list is a couple 
of hundred elements long, and you print out some - but not all - of 
the [y] elements. While the [y] list remains live, a whole collection of 
half-evaluated intermediate states also remains live - enter the 
massive space leak.
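A minimal sketch of the usual fix (the function names are mine): force the accumulator before returning it, so a chain of half-evaluated states can't build up.

```haskell
import Data.List (mapAccumL)

-- Leaky shape: nothing demands (acc + x) until the very end,
-- so a chain of addition thunks builds up in the state.
step :: Int -> Int -> (Int, Int)
step acc x = (acc + x, 2 * x)

-- Stricter shape: seq forces the new state at each element.
step' :: Int -> Int -> (Int, Int)
step' acc x = let acc' = acc + x in acc' `seq` (acc', 2 * x)
```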

Projects I have written which suffered massive space leaks include:
- A parallel lazy evaluator,
http://cs.anu.edu.au/ample.
The state: the machine state, threads, stacks, the heaps.
The [x] list: machine instructions.
The [y] list: profiling info.
If you don't print out all possible profiling info then all those 
intermediate states remain live and you've got a massive space leak.

- An asteroids game I wrote for X.
The state: positions and velocities of the ship, asteroids, and missiles.
The [x] list: ship control data, key codes.
The [y] list: a list of graphics prims which get rendered to the screen.
You would think that because the list of prims gets consumed by the 
renderer it wouldn't leak.. but it did, about 200k / sec. Then I changed 
some seemingly irrelevant part of the code and the space leak went 
away.. and I have no idea why. yay.

- My ICFP contest entry for this year,
Not really a mapAccumL type problem, but still a space leak.
At one stage I had a bug in the parser for my assembly language where it 
didn't handle comments properly. One of the entries I ran through my 
simulator had comments at the end of lines with Drop statements, but 
nowhere else.

The simulator ran fine until an ant found some food, carried it back to 
the hill, then attempted to drop it.. parser error. My Haskell program 
hadn't fully evaluated the thunks to read / parse / assemble that line 
of code until the ant had tried to use that part of the program.. I 
laughed, and laughed.. :).

...
I think the only way to totally slay bugs like these is to use some 
deepSeq'esque function on all your intermediate states, or any time when 
you reckon some evaluation should be 'finished'. Either that or explicitly 
define intermediate structures to be fully strict.
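A sketch of that deepSeq-esque approach using the deepseq package (an assumption on my part; it is a standard GHC boot library nowadays, though not in 2004):

```haskell
import Control.DeepSeq (NFData, deepseq)

-- Fully evaluate each intermediate state before carrying on,
-- so no half-evaluated thunks survive an iteration.
stepStrict :: NFData s => (s -> s) -> s -> s
stepStrict f s = let s' = f s in s' `deepseq` s'

-- Iterate a state transition n times, forcing the state each step.
iterateStrict :: NFData s => Int -> (s -> s) -> s -> s
iterateStrict 0 _ s = s
iterateStrict n f s = iterateStrict (n - 1) f (stepStrict f s)
```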

I see the GHC people have got a seqExpr function in 
compiler/coreSyn/CoreSyn.lhs, which I imagine would be applied to the 
expression tree after every compiler stage.

Ben.

Graham Klyne wrote:
I've been starting to take note of discussion about space leaks in 
Haskell.  So far, it's not a topic that's bothered me, other than 
obvious programming errors, and I've never found the need to force 
strict evaluation of my Haskell programs.  I find myself wondering why 
this is.

Your comment about arithmetic expressions is telling:  the kind of 
program I have been writing (parsers, symbolic processing, etc.) 
performs almost no arithmetic.  (So far, I've used lists instead of 
arrays, so a usual source of arithmetic functions is missing.)

I've also worked with datasets that fit in memory, so failure to 
stream data hasn't been a problem.  I suspect that's the more 
pernicious case for space leaks, since the causes aren't always so obvious.

Are there any guidelines or warning signs to look out for that may be 
indicative of potential space-leak problems?  So far, I can think of:
- arithmetic results
- multiple uses of a large data value



Re: [Haskell-cafe] Ambiguous type variable

2004-07-26 Thread Ben Lippmeier
Arjun Guha wrote:
AlphaBeta.hs:1:
Ambiguous type variable `v' in the top-level constraint:
  `Ord v' arising from use of `maxValue' at AlphaBeta.hs:12 

Another person had a similar problem just the other week.. The error 
messages are different, but the problem is the same.

Read the messages in this thread:
http://www.haskell.org//pipermail/haskell-cafe/2004-July/006424.html
Then read
Type classes with functional dependencies, Mark P. Jones, 2000
You can get the paper at http://www.cse.ogi.edu/~mpj
Ben.



Re: [Haskell-cafe] optimising for vector units

2004-07-25 Thread Ben Lippmeier
Matthew Roberts wrote:
Does anybody know if any of the Haskell compilers are able to optimise for
vector units (like MMX, SSE(2), 3D_Now! and AltiVec)?
 

No, not as yet. FP systems don't generally provide enough control over 
how data is laid out in memory to be able to invoke SIMD operations on 
it (or control data locality).

I suppose you could add an unboxed Float32x4 type and appropriate 
instances of IOArrays etc to GHC, but if you wanted to do anything with 
it you'd have to use specialised unboxed operations.. and it'd probably 
be more trouble than just writing it in assembler.

I would have thought that if a developer cared enough about the 
performance of their program to turn to non-portable SIMD extensions, 
they'd want to write it in assembler anyway so they had absolute control 
over what was going on..

... though it would be nice to be able to define
(+) :: (Float, Float, Float, Float)
    -> (Float, Float, Float, Float)
    -> (Float, Float, Float, Float)

and expect it to go via SSE..
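Written out in plain Haskell (my sketch, with a fresh name rather than overloading (+)), the wished-for operation is just element-wise addition; mapping it onto a single SSE register operation would be the compiler's job:

```haskell
-- Element-wise addition on a 4-tuple of Floats: the scalar code a
-- vectorising compiler would ideally lower to one SSE addps.
addF4 :: (Float, Float, Float, Float)
      -> (Float, Float, Float, Float)
      -> (Float, Float, Float, Float)
addF4 (a, b, c, d) (e, f, g, h) = (a + e, b + f, c + g, d + h)
```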
Ben.
My investigations have revealed that C requires special programming
constructs to make the most of these processor capabilities (i.e. the
compiler is not able to work it out for itself).
Is there any work on getting Haskell compilers to work this kind of thing
out?
Cheers,
Matt
 



Re: [Haskell-cafe] Inferred type is not general enough

2004-07-08 Thread Ben Lippmeier
Ivan,
I don't yet know how to explain this formally, but I know how to fix 
your problem..

You need to add a parameter to the Packet class so the compiler knows 
what type to use for the Location.

.. The following code works for me.. You'll need to use a 
compiler/interpreter which supports multi-parameter type classes, like 
GHCi with the Glasgow extensions turned on.

-
class Location a where
   point :: a -> String

class Packet a loc where
   source      :: Location loc => a -> loc
   destination :: Location loc => a -> loc
   size        :: Num b => a -> b
--
data TestLocation
   = TestSource
   | TestDestination
data TestPacket = TestPacket
--
instance Location TestLocation where
   point a = "location"

instance Packet TestPacket TestLocation where
   source p      = TestSource
   destination p = TestDestination
   size p        = 99



Re: [Haskell-cafe] Inferred type is not general enough

2004-07-08 Thread Ben Lippmeier
Ivan..
Uh.. by 'works' I meant 'compiles' :)
Here is a fixed version..
As I understand it, the two-parameter class (Location loc => Packet p loc) 
means:
  "loc is a Location, and types p and loc are related by class Packet"

with just that information, if you try (as you did) something like

  source TestPacket

then you've given the compiler the type for 'p' (TestPacket), but it 
won't go the distance and decide that the type for 'loc' is TestLocation.. 
even though that's the only instance you've given it.

You can tell the compiler that "the type of 'p' sets the type of 'loc'" 
with a functional dependency.. which is the | p -> loc annotation at 
the end of the class definition in the new version below.

There is a paper which covers exactly this problem, in detail. I suggest 
you read it (I did).

Type classes with functional dependencies, Mark P. Jones, 2000
You can get it from his homepage at, http://www.cse.ogi.edu/~mpj/
Ben.

Ivan Tihonov wrote:
It compiles well, but doesn't work for me :(
BTW: (email answers are not always real-time :) )
new version:
--
class Location a where
   point :: a -> String

class Location loc => Packet p loc | p -> loc where
   source      :: p -> loc
   destination :: p -> loc
   size        :: Num b => p -> b
--
data TestLocation
   = TestSource
   | TestDestination
   deriving (Show)

data TestPacket = TestPacket
--
instance Location TestLocation where
   point a = "location"

instance Packet TestPacket TestLocation where
   source p      = TestSource
   destination p = TestDestination
   size p        = 99
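Assembled into a complete module (my consolidation, with the language pragmas GHC needs and the size method dropped for brevity), the fundep version typechecks, and source TestPacket now resolves on its own:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

class Location a where
  point :: a -> String

-- 'p -> loc': the packet type determines the location type.
class Location loc => Packet p loc | p -> loc where
  source      :: p -> loc
  destination :: p -> loc

data TestLocation = TestSource | TestDestination
  deriving (Eq, Show)

data TestPacket = TestPacket

instance Location TestLocation where
  point _ = "location"

instance Packet TestPacket TestLocation where
  source _      = TestSource
  destination _ = TestDestination

-- With the dependency in place this is no longer ambiguous:
--   point (source TestPacket)
```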


