Re: [Haskell] Guidelines for respectful communication

2018-12-07 Thread Ben Lippmeier

> On 7 Dec 2018, at 6:47 pm, Jonathan Lange  wrote:
> 
> In particular, her suggestion about pairing guidelines for respectful 
> communications with guidelines for what to do when things break down is an 
> excellent one, and has worked well in other communities to help those on the 
> fringes of a community feel welcome and able to contribute.


I’ll also back this up. Over the last couple of years I’ve been involved in 3 
separate communities which have struggled with many of the same issues. The way 
I see it, guidelines for Respectful Communication are statements of the desired 
end goal, but they don’t provide much insight as to the root causes of the 
problems, or how to address them. At the risk of trivialising the issue, one 
could reduce many such statements to “Can everyone please stop shouting and be 
nice to each other.” (CEPSSaBNTEO)

Here are two templates for problems that I’ve seen over and over, and not 
necessarily in this community. The names used are placeholders.

1) Alice has become very interested in a particular technical issue and wants 
to change the direction of Project X to address it. Alice has contributed to 
Project X on and off, but did not start it and is not currently leading it. The 
main developer is Bob who agrees that the issue exists, but is focused on other 
things right now, and isn’t motivated to have a long discussion about something 
he sees as a minor detail. Alice continues to post on a public list about the 
issue, until Bob becomes exasperated and replies with something like “yes, but 
I don’t care about that right now”. Alice thinks the comment is directed at her 
personally, posts a hurt reply, then Charlie, Debbie, and Edward chime in about 
whether or not that was an appropriate communication. There is a thread on 
Reddit with 50 comments from people that Alice and Bob have never heard of. 
Both Alice and Bob are demotivated by the whole experience, and future 
potential contributors to Project X stumble across the Reddit post and decide 
they don’t want to get involved anymore.

2) Charlie and Debbie have been building System Y for the last 10 years as a 
side project, which over time has grown to be a key part of the public 
infrastructure. Both Charlie and Debbie are well known and respected by the 
community, but don’t always have time to fix bugs promptly. System Y also has
some long-standing issues that everyone grumbles about but also knows how to
work around. Edward works for Company Z, which has recently formed to do
consulting in this area. Company Z has publicly stated that they will invest 2
million dollars in improving the public infrastructure, and plan to build a
replacement for System Y. Some think that Edward is trying to take over System 
Y as a marketing exercise, others think System Y should have been replaced long 
ago, others think that Edward should just start funding Charlie and Debbie's 
work on System Y full time, instead of trying to build a new system from 
scratch. Charlie and Debbie are overwhelmed with all the emails and have less 
and less time to actually fix bugs in System Y. Next, Harold, who has been 
watching from the sidelines, posts a long tirade about all the reasons that 
Company Z is a terrible company doing the wrong things for the wrong reasons. 
Charlie barely knows Harold, but posts a small comment agreeing with the 
general sentiment. Edward sees the comment and promises himself that there is 
no way the ungrateful System Y people are ever getting any of his money. Two
years later both System Y and Company Z’s SystemY-Prime are in common use, do
basically the same thing, and everyone grumbles about both.

The root problems here are differences in motivation, miscommunication, and the 
Internet Amplification Effect (IAE). Harsh posts in public forums are a surface 
effect that feeds back and exacerbates the underlying problems. People like 
Harold who stoke the flames don’t tend to read the Respectful Communication 
guidelines, and everyone always feels justified in their own opinions. There is 
published work on dealing with conflicts in online communities [1], but I don’t 
pretend to be an expert. 

Perhaps an interested party could start a wiki page with statements of the form
“If you feel like X is happening then consider doing Y.” This might also help
people who are not naturally good at reading the thoughts and motivations of
others, and who would do better with such advice written down.

Peace,
Ben.

[1] Managing Conflicts in Open Source Communities
Ruben Van Wendel De Joode, 2004.

___
Haskell mailing list
Haskell@haskell.org
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell


[Haskell] CFP: Haskell Symposium Regular Track Final Call

2015-05-17 Thread Ben Lippmeier
=
 ACM SIGPLAN  CALL FOR SUBMISSIONS
Haskell Symposium 2015

Vancouver, Canada, 3-4 September 2015, directly after ICFP
  http://www.haskell.org/haskell-symposium/2015
=

Reminder that the Haskell Symposium Regular Track
abstract deadline is this Tuesday, 19th of May,
with full papers due this Friday, 22nd of May.

Authors who have *already submitted to the early track*
have until the 5th of June to resubmit an improved version of
those papers.

Deadlines stated are valid anywhere on earth.
(the HotCRP submission site states them in US EDT, but don't fret)

See the website for further details
http://www.haskell.org/haskell-symposium/2015

=




[Haskell] Haskell 2015: 2nd Call for Papers

2015-03-04 Thread Ben Lippmeier
 columns. The length is restricted
to 12 pages, except for Experience Report papers, which are
restricted to 6 pages. Papers need not fill the page limit -- for
example, a Functional Pearl may be much shorter than 12 pages.
Each paper submission must adhere to SIGPLAN's republication policy,
as explained on the web.

Demo proposals are limited to 2-page abstracts, in the same ACM
format as papers.

Functional Pearls, Experience Reports, and Demo Proposals
should be marked as such with those words in the title at time of
submission.

The paper submission deadline and length limitations are firm.
There will be no extensions, and papers violating the length
limitations will be summarily rejected.

A link to the paper submission system will appear on the
Haskell Symposium web site closer to the submission deadline.


Submission Timetable:
=

            Early Track         Regular Track        System Demos
            -----------         -------------        ------------
13th March  Paper Submission
1st  May    Notification
19th May                        Abstract Submission
22nd May                        Paper Submission
5th  June   Resubmission                             Demo Submission
26th June   Notification        Notification         Notification
19th July   Final papers due    Final papers due

Deadlines stated are valid anywhere on earth.

In this iteration of the Haskell Symposium we are trialling a
two-track submission process, so that some papers can gain early
feedback. Papers can be submitted to the early track on 13th March.
On 1st May, strong papers are accepted outright, and the others will
be given their reviews and invited to resubmit. On 5th June early
track papers may be resubmitted, and are sent back to the same
reviewers. The Haskell Symposium regular track operates as in
previous years. Papers accepted via the early and regular tracks are
considered of equal value and will not be distinguished in the
proceedings.

Although all papers may be submitted to the early track, authors of
functional pearls and experience reports are particularly encouraged
to use this mechanism. The success of these papers depends heavily
on the way they are presented, and submitting early will give the
program committee a chance to provide feedback and help draw out
the key ideas.


Program Committee:
===

   Mathieu Boespflug- Tweag I/O
   Edwin Brady  - University of St Andrews
   Atze Dijkstra- Utrecht University
   Tom DuBuisson- Galois
   Torsten Grust- University of Tuebingen
   Patrik Jansson   - Chalmers University of Technology
   Patricia Johann  - Appalachian State University
   Oleg Kiselyov- Tohoku University
   Edward Kmett - McGraw Hill Financial
   Neelakantan Krishnaswami - University of Birmingham
   Ben Lippmeier (chair)- Vertigo Technology
   Hai (Paul) Liu   - Intel Labs
   Garrett Morris   - University of Edinburgh
   Dominic Orchard  - Imperial College London
   Matt Roberts - Macquarie University
   Tim Sheard   - Portland State University
   Joel Svensson- Indiana University
   Edsko de Vries   - Well Typed

=


[Haskell] CFP: Haskell Symposium 2015

2015-02-02 Thread Ben Lippmeier
 font in two columns. The length is restricted
to 12 pages, except for Experience Report papers, which are
restricted to 6 pages. Papers need not fill the page limit -- for
example, a Functional Pearl may be much shorter than 12 pages.
Each paper submission must adhere to SIGPLAN's republication policy,
as explained on the web.

Demo proposals are limited to 2-page abstracts, in the same ACM
format as papers.

Functional Pearls, Experience Reports, and Demo Proposals
should be marked as such with those words in the title at time of
submission.

The paper submission deadline and length limitations are firm.
There will be no extensions, and papers violating the length
limitations will be summarily rejected.

A link to the paper submission system will appear on the
Haskell Symposium web site closer to the submission deadline.


Submission Timetable:
=

            Early Track         Regular Track        System Demos
            -----------         -------------        ------------
13th March  Paper Submission
1st  May    Notification
19th May                        Abstract Submission
22nd May                        Paper Submission
5th  June   Resubmission                             Demo Submission
26th June   Notification        Notification         Notification
19th July   Final papers due    Final papers due

Deadlines stated are valid anywhere on earth.

In this iteration of the Haskell Symposium we are trialling a
two-track submission process, so that some papers can gain early
feedback. Papers can be submitted to the early track on 13th March.
On 1st May, strong papers are accepted outright, and the others will
be given their reviews and invited to resubmit. On 5th June early
track papers may be resubmitted, and are sent back to the same
reviewers. The Haskell Symposium regular track operates as in
previous years. Papers accepted via the early and regular tracks are
considered of equal value and will not be distinguished in the
proceedings.

Although all papers may be submitted to the early track, authors of
functional pearls and experience reports are particularly encouraged
to use this mechanism. The success of these papers depends heavily
on the way they are presented, and submitting early will give the
program committee a chance to provide feedback and help draw out
the key ideas.


Program Committee:
===

Mathieu Boespflug- Tweag I/O
Edwin Brady  - University of St Andrews
Atze Dijkstra- Utrecht University
Tom DuBuisson- Galois
Torsten Grust- University of Tuebingen
Patrik Jansson   - Chalmers University of Technology
Patricia Johann  - Appalachian State University
Oleg Kiselyov- Tohoku University
Edward Kmett - McGraw Hill Financial
Neelakantan Krishnaswami - University of Birmingham
Ben Lippmeier (chair)- Vertigo Technology
Hai (Paul) Liu   - Intel Labs
Garrett Morris   - University of Edinburgh
Dominic Orchard  - Imperial College London
Matt Roberts - Macquarie University
Tim Sheard   - Portland State University
Joel Svensson- Indiana University
Edsko de Vries   - Well Typed

=

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] Compiler stops at SpecConstr optimization

2013-08-29 Thread Ben Lippmeier

On 30/08/2013, at 2:38 AM, Daniel Díaz Casanueva wrote:

 While hacking on one of my projects, one of my modules stopped compiling for 
 apparently no reason. The compiler just freezes (as if it were in an 
 infinite loop) while trying to compile that particular module. Since I had 
 this problem I have been trying to reduce the problem as much as I could, and 
 I came up with this small piece of code:
 
  module Blah (foo) where
 
  import Data.Vector (Vector)
  import qualified Data.Vector as V
 
  foo :: (a -> a) -> Vector a -> Vector a
  foo f = V.fromList . V.foldl (\xs x -> f x : xs) []

Probably an instance of this one:

http://ghc.haskell.org/trac/ghc/ticket/5550

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Problems with installing dph

2013-08-13 Thread Ben Lippmeier

On 13/08/2013, at 9:40 PM, Jan Clemens Gehrke wrote:

 Hi Glasgow-Haskell-Users, 
 
 I'm trying to get started with DPH and have some problems. 
 If I try getting DPH with 
 cabal install dph-examples 
 I get two warnings numerous times. The first warning is: 
 
 You are using a new version of LLVM that hasn't been tested yet! 
 We will try though... 
 
 and the second one: 
 
 Warning: vectorisation failure: identityConvTyCon: type constructor contains 
 parallel arrays [::] 
   Could NOT call vectorised from original version 

You can safely ignore this.


 Cabal finishes with: 
 
 Installing executable(s) in /home/clemens/.cabal/bin 
 Installed dph-examples-0.7.0.5 
 
 If I try compiling the first example from 
 http://www.haskell.org/haskellwiki/GHC/Data_Parallel_Haskell 
 with 
 ghc -c -Odph -fdph-par DotP.hs 
 I get 
 ghc: unrecognised flags: -fdph-par 

The wiki page is old and badly needs updating. We removed the -fdph-par flag 
about a year ago.

Check the dph-examples packages for the correct compiler flags to use, eg:

-eventlog -rtsopts -threaded -fllvm -Odph -package dph-lifted-vseg -fcpr-off 
-fsimpl-tick-factor=1000

Also note that DPH is still an experimental voyage into theoretical computer 
science. It should compile programs, and you should be able to run them, but 
they won't be fast enough to solve any of your actual problems.

Ben.



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Proposal: NoImplicitPreludeImport

2013-05-28 Thread Ben Lippmeier

On 29/05/2013, at 9:02 AM, Ben Franksen wrote:

 Bryan O'Sullivan wrote:
 
 I have made a wiki page describing a new proposal,
 NoImplicitPreludeImport, which I intend to propose for Haskell 2014:
 
 http://hackage.haskell.org/trac/haskell-prime/wiki/NoImplicitPreludeImport
 
 What do you think?
 
 This is a truly terrible idea.
 
 It purports to be a step towards fixing the backwards compatibility
 problem, but of course it breaks every module ever written along the way,
 and it means that packages that try to be compatible across multiple
 versions of GHC will need mandatory CPP #ifdefs for years to come.
 
 I think it need not necessarily come to that. If we do this right, then 
 adding a line
 
  extensions: ImplicitPrelude

You could handle this more generally by implementing a compiler flag that 
causes modules to be imported.

We've already got -package P for exposing packages, so we could add -module M
for exposing modules.

When compiling with a Haskell2014 compiler, just add the -module Prelude flag
to your Makefile/.cabal file.
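For illustration only: the -module flag is hypothetical and does not exist in
any released GHC; this just sketches how a build file might use the proposed
scheme, by analogy with the existing -package flag.

```make
# Hypothetical: '-module Prelude' is the flag proposed above; it is
# not a real GHC flag. '-package base' is the existing analogue.
GHC       = ghc
GHC_FLAGS = -package base -module Prelude

%.o : %.hs
	$(GHC) $(GHC_FLAGS) -c $<
```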

Ben.


___
Haskell-prime mailing list
Haskell-prime@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-prime


Re: [Haskell-cafe] [haskell.org Google Summer of Code 2013] Approved Projects

2013-05-28 Thread Ben Lippmeier

On 29/05/2013, at 1:11 AM, Edward Kmett wrote:

 This unfortunately means that we can't really show the unaccepted proposals 
 with information about how to avoid getting your proposal rejected.

You can if you rewrite the key points of a proposal to retain the overall
message, but remove identifying information. I think it would be helpful to
write up some of the general reasons for projects being rejected.

I tried to do this for Haskell experience reports, on the Haskell Symposium 
experience report advice page.
 http://www.haskell.org/haskellwiki/HaskellSymposium/ExperienceReports


I'd imagine you could write up some common proposal / rejection / advice tuples 
like:

Proposal: I want to write an MMORPG in Haskell, because this would be a good
demonstration for Haskell in a large real world project. We can use this as a
platform to develop the networking library infrastructure.

Rejection: This project is much too big, and the production of an MMORPG
wouldn't benefit the community as a whole.

Advice: If you know of specific problems in the networking library 
infrastructure, then focus on those, using specific examples of where people 
have tried to do something and failed.


Ben.



[Haskell] Haskell Symposium Experience Report Advice Page

2013-04-30 Thread Ben Lippmeier

Dear Haskell Hackers,

I have started an advice page for people who plan to submit experience reports
to the upcoming Haskell Symposium, based on my experience as a PC member last
year:
http://www.haskell.org/haskellwiki/HaskellSymposium/ExperienceReports

Haskell Symposium experience report acceptance rates are typically lower than 
for full papers, and it would be good to improve this. Please add any comments, 
links or insights you may have to the above page.

If you are planning to submit an experience report... then also read the page! 
:-)

Ben.



Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-25 Thread Ben Lippmeier

On 25/04/2013, at 3:47 AM, Duncan Coutts wrote:

 It looks like fold and unfold fusion systems have dual limitations:
 fold-based fusion cannot handle zip style functions, while unfold-based
 fusion cannot handle unzip style functions. That is, fold-based fusion cannot
 consume multiple inputs, while unfold-based fusion cannot produce multiple
 outputs.

Yes. This is a general property of data-flow programs and not just compilation 
via Data.Vector style co-inductive stream fusion, or a property of 
fold/unfold/hylo fusion. 


Consider these general definitions of streams and costreams.

-- A stream must always produce an element.
type Stream a   = IO a

-- A costream must always consume an element.
type CoStream a = a -> IO ()


And operators on them (writing S for Stream and C for CoStream).

-- Versions of map.
map     :: (a -> b) -> S a -> S b        (ok)
comap   :: (a -> b) -> C b -> C a        (ok)

-- Versions of unzip.
unzip   :: S (a, b) -> (S a, S b)        (bad)
counzip :: C a -> C b -> C (a, b)        (ok)
unzipc  :: S (a, b) -> C b -> S a        (ok)

-- Versions of zip.
zip     :: S a -> S b -> S (a, b)        (ok)
cozip   :: C (a, b) -> (C a, C b)        (bad)
zipc    :: C (a, b) -> S a -> C b        (ok)



The operators marked (ok) can be implemented without buffering data, while the 
combinators marked (bad) may need an arbitrary sized buffer.

Starting with 'unzip', suppose we pull elements from the first component of the 
result (the (S a)) but not the second component (the (S b)). To provide these 
'a' elements, 'unzip' must pull tuples from its source stream (S (a, b)) and 
buffer the 'b' part until someone pulls from the (S b).

Dually, with 'cozip', suppose we push elements into the first component of the 
result (the (C a)). The implementation must buffer them until someone pushes 
the corresponding element into the (C b), only then can it push the whole tuple 
into the source (C (a, b)) costream.


The two combinators unzipc and zipc are hybrids:

For 'unzipc', if we pull an element from the (S a), then the implementation can 
pull a whole (a, b) tuple from the source (S (a, b)) and then get rid of the 
'b' part by pushing it into the (C b). The fact that it can get rid of the 'b' 
part means it doesn't need a buffer.

Similarly, for 'zipc', if we push a 'b' into the (C b) then the implementation 
can pull the corresponding 'a' part from the (S a) and then push the whole (a, 
b) tuple into the C (a, b). The fact that it can get the corresponding 'a' 
means it doesn't need a buffer.

I've got some hand drawn diagrams of this if anyone wants them (mail me), but 
none of it helps implement 'unzip' for streams or 'cozip' for costreams. 
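For concreteness, the (ok) operators really can be implemented without any
buffering. Here is a sketch in plain IO code; this is my own illustration,
using only the Stream/CoStream definitions above, not any Repa machinery:

```haskell
import Prelude hiding (zip)

-- From the post: a stream always produces, a costream always consumes.
type Stream a   = IO a
type CoStream a = a -> IO ()

-- zip: pull one element from each source; no buffering required.
zip :: Stream a -> Stream b -> Stream (a, b)
zip sa sb = (,) <$> sa <*> sb

-- counzip: push each component of the pair into its own sink;
-- no buffering required.
counzip :: CoStream a -> CoStream b -> CoStream (a, b)
counzip ca cb (x, y) = ca x >> cb y

-- unzipc: pull a pair, keep the 'a', dispose of the 'b' into the sink.
unzipc :: Stream (a, b) -> CoStream b -> Stream a
unzipc s cb = do { (x, y) <- s; cb y; pure x }
```

Trying to write unzip :: Stream (a, b) -> (Stream a, Stream b) in the same
style immediately hits the problem described above: the 'b' parts must be
stored somewhere until their consumer asks for them.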



 I'll be interested to see in more detail the approach that Ben is
 talking about. As Ben says, intuitively the problem is that when you've
 got multiple outputs so you need to make sure that someone is consuming
 them and that that consumption is appropriately synchronised so that you
 don't have to buffer (buffering would almost certainly eliminate the
 gains from fusion). That might be possible if ultimately the multiple
 outputs are combined again in some way, so that overall you still have a
 single consumer, that can be turned into a single lazy or eager loop.


At least for high performance applications, I think we've reached the limit of 
what short-cut fusion approaches can provide. By short cut fusion, I mean 
crafting a special source program so that the inliner + simplifier + 
constructor specialisation transform can crunch down the intermediate code into 
a nice loop. Geoff Mainland's recent paper extended stream fusion with support 
for SIMD operations, but I don't think stream fusion can ever be made to fuse 
programs with unzip/cozip-like operators properly. This is a serious problem 
for DPH, because the DPH vectoriser naturally produces code that contains these 
operators.

I'm currently working on Repa 4, which will include a GHC plugin that hijacks 
the intermediate GHC core code and performs the transformation described in 
Richard Waters's paper "Automatic Transformation of Series Expressions into
Loops". The plugin will apply to stream programs, but not affect the existing
fusion mechanism via delayed arrays. I'm using a cut down 'clock calculus' from 
work on synchronous data-flow languages to guarantee that all outputs from an 
unzip operation are consumed in lock-step. Programs that don't do this won't be 
well typed. Forcing synchronicity guarantees that Waters's transform will apply 
to the program.

The Repa plugin will also do proper SIMD vectorisation for stream programs, 
producing the SIMD primops that Geoff recently added. Along the way it will 
brutally convert all operations on boxed/lifted numeric data to their unboxed 
equivalents, because I am sick of adding bang patterns to every single function 
parameter in Repa programs. 

Ben.




Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-25 Thread Ben Lippmeier

On 26/04/2013, at 2:15 PM, Johan Tibell wrote:

 Hi Ben,
 
 On Thu, Apr 25, 2013 at 7:46 PM, Ben Lippmeier b...@ouroborus.net wrote:
 The Repa plugin will also do proper SIMD vectorisation for stream programs, 
 producing the SIMD primops that Geoff recently added. Along the way it will 
 brutally convert all operations on boxed/lifted numeric data to their 
 unboxed equivalents, because I am sick of adding bang patterns to every 
 single function parameter in Repa programs.
 
 How far is this plugin from being usable to implement a
 
 {-# LANGUAGE Strict #-}
 
 pragma for treating a single module as if Haskell was strict?

There is already one that does this, but I haven't used it.

http://hackage.haskell.org/package/strict-ghc-plugin

It's one of the demo plugins, though you need to mark individual functions 
rather than the whole module (which would be straightforward to add).

The Repa plugin is only supposed to munge functions using the Repa library, 
rather than the whole module.

Ben.





Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-22 Thread Ben Lippmeier

On 22/04/2013, at 5:27 PM, Edward Z. Yang wrote:

 So, if I understand correctly, you're using the online/offline
 criterion to resolve non-directed cycles in pipelines?  (I couldn't
 tell how the Shivers paper was related.)

The online criterion guarantees that the stream operator does not need to 
buffer an unbounded amount of data (I think). 

I'm not sure what you mean by "resolve non-directed cycles".

The Shivers paper describes the same basic approach of splitting the code for a
stream operator into the parts that run before the loop, for each element of
the loop, after the loop, and so on. Splitting multiple operators this way and
then merging the parts into a single loop provides the concurrency required by
the description in John Hughes's thesis.
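A minimal sketch of that splitting (my own illustration, with invented names,
not the actual Repa or Shivers machinery):

```haskell
-- A loop split into the parts Shivers describes: code that runs
-- before the loop, for each element, and after the loop.
data Loop s a = Loop
  { before  :: s            -- initial state, runs before the loop
  , forEach :: s -> a -> s  -- runs once per element
  , after   :: s -> s       -- runs after the loop
  }

-- Merging two operators: pair their states, so both sets of parts
-- execute inside one combined loop.
merge :: Loop s a -> Loop t a -> Loop (s, t) a
merge l r = Loop
  { before  = (before l, before r)
  , forEach = \(s, t) x -> (forEach l s x, forEach r t x)
  , after   = \(s, t) -> (after l s, after r t)
  }

-- Running a merged loop makes a single traversal of the input.
runLoop :: Loop s a -> [a] -> s
runLoop l xs = after l (foldl (forEach l) (before l) xs)
```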
 
Ben.





Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-21 Thread Ben Lippmeier

On 22/04/2013, at 11:07 , Edward Z. Yang ezy...@mit.edu wrote:

 Hello all, (cc'd stream fusion paper authors)
 
 I noticed that the current implementation of stream fusion does
 not support multiple-return stream combinators, e.g.
 break :: (a -> Bool) -> [a] -> ([a], [a]).  I thought a little
 bit about how one might go about implementing this, but the problem
 seems nontrivial. (One possibility is to extend the definition
 of Step to support multiple return, but the details are a mess!)
 Nor, as far as I can tell, does the paper give any treatment of
 the subject.  Has anyone thought about this subject in some detail?


I've spent the last few months fighting this exact problem.

The example you state is one instance of a more general limitation. Stream 
fusion (and most other short-cut fusion approaches) cannot fuse a producer into 
multiple consumers. The fusion systems don't support any unzip-like function, 
where elements from the input stream end up in multiple output streams. For 
example:

unzip :: [(a, b)] -> ([a], [b])

dup   :: [a] -> ([a], [a])

The general problem is that if elements of one output stream are demanded 
before the other, then the stream combinator must buffer elements until they 
are demanded by both outputs.

John Hughes described this problem in his thesis, and gave an informal proof 
that it cannot be solved without some form of concurrency -- meaning the 
evaluation of the two consumers must be interleaved.
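To see where the buffer comes from, here is a sketch of 'dup' in a pull-stream
model (Stream a = IO a, an assumption for illustration; this is my own sketch
and not the Repa 4 mechanism). Whichever output runs ahead forces elements
into the other output's buffer:

```haskell
import Data.IORef

-- A pull stream: an action that produces the next element on demand.
type Stream a = IO a

-- dup for pull streams: either output may run ahead of the other, so
-- elements must be buffered until both consumers have seen them.
dup :: Stream a -> IO (Stream a, Stream a)
dup src = do
  buf1 <- newIORef []   -- pulled from src by output 2, awaiting output 1
  buf2 <- newIORef []   -- pulled from src by output 1, awaiting output 2
  let pull mine other = do
        xs <- readIORef mine
        case xs of
          x : rest -> do writeIORef mine rest
                         pure x
          []       -> do x <- src
                         modifyIORef other (++ [x])  -- queue for other side
                         pure x
  pure (pull buf1 buf2, pull buf2 buf1)
```

If one output is never demanded, the other side's buffer grows without bound,
which is exactly the unbounded-buffer problem described above.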

I've got a solution for this problem and it will form the basis of Repa 4,
which I'm hoping to finish a paper about for the upcoming Haskell Symposium.

Ben.












Re: [Haskell-cafe] Stream fusion and span/break/group/init/tails

2013-04-21 Thread Ben Lippmeier

On 22/04/2013, at 12:23 , Edward Z. Yang ezy...@mit.edu wrote:

 I've got a solution for this problem and it will form the basis of
 Repa 4, which I'm hoping to finish a paper about for  the upcoming
 Haskell Symposium.
 
 Sounds great! You should forward me a preprint when you have something
 in presentable shape. I suppose before then, I should look at 
 repa-head/repa-stream
 to figure out what the details are?

The basic approach is already described in:

Automatic Transformation of Series Expressions into Loops
Richard Waters, TOPLAS 1991

The Anatomy of a Loop
Olin Shivers, ICFP 2005


The planned contributions of the HS paper are:
 1) How to extend the approach to the combinators we need for DPH
 2) How to package it nicely into a Haskell library.

I'm still working on the above...

Ben.




Re: [Haskell-cafe] ANN: Nomyx 0.1 beta, the game where you can change the rules

2013-02-26 Thread Ben Lippmeier

On 27/02/2013, at 10:28 , Corentin Dupont corentin.dup...@gmail.com wrote:

 Hello everybody!
 I am very happy to announce the beta release [1] of Nomyx, the only game 
 where You can change the rules.

Don't forget 1KBWC: http://www.corngolem.com/1kbwc/

Ben.





Re: GHC 7.8 release?

2013-02-07 Thread Ben Lippmeier

On 08/02/2013, at 5:15 AM, Simon Peyton-Jones wrote:

 So perhaps we principally need a way to point people away from GHC and 
 towards HP?  eg We could prominently say at every download point “Stop!  Are 
 you sure you want this?  You might be better off with the Haskell Platform!  
 Here’s why...”.

Right now, the latest packages uploaded to Hackage get built with ghc-7.6 
(only), and all the pages say "Built on ghc-7.6". By doing this we force *all* 
library developers to run GHC 7.6. I think this sends the clearest message 
about what the real GHC version is.

We'd have more chance of turning Joe User off the latest GHC release if Hackage 
was clearly split into stable/testing channels. Linux distros have been doing 
this for years.

Ben.



[Haskell] ANN: Disciplined Disciple Compiler (DDC) 0.3.1

2012-12-23 Thread Ben Lippmeier

The Disciplined Disciple Compiler Strike Force is pleased to announce the 
release of DDC 0.3.1. 

DDC is a research compiler used to investigate program transformation in the 
presence of computational effects. This is a development release. There is 
enough implemented to experiment with the core language, but not enough to 
write real programs.

New Features

 * Compilation via C and LLVM for first-order programs.
 * Cross-module inlining.
 * An effect-aware rewrite rule framework.
 * Generation of LLVM aliasing and constancy meta-data.
 * More program transformations:
Beta (substitute), Bubble (move type-casts), Elaborate (add witnesses),
Flatten (eliminate nested bindings), Forward (let-floating),
Namify (add names), Prune (dead-code elimination), Snip (eliminate nested 
applications).


People
~~
 The following people contributed to DDC since the last release:
 Tran Ma- LLVM aliasing and constancy meta-data.
 Amos Robinson  - Rewrite rule system and program transforms.
 Erik de Castro Lopo- Build framework.
 Ben Lippmeier  - Code generators, framework, program transforms.


Full release notes: 
  http://code.ouroborus.net/ddc/ddc-stable/RELEASE

Further reading:
  http://disciple.ouroborus.net/

For the impatient:
  cabal update; cabal install ddc-tools


Re: Does GHC still support x87 floating point math?

2012-12-06 Thread Ben Lippmeier

On 06/12/2012, at 12:12 , Johan Tibell wrote:

 I'm currently trying to implement word2Double#. Other such primops
 support both x87 and sse floating point math. Do we still support x87
 fp math? Which compiler flag enables it?

It's on by default unless you use the -msse2 flag. The x87 support is horribly 
slow though. I don't think anyone would notice if you deleted the x87 code and 
made SSE the default, especially now that we have the LLVM code generator. SSE 
has been the way to go for over 10 years now.

Ben.




Re: The end of an era, and the dawn of a new one

2012-12-06 Thread Ben Lippmeier

On 06/12/2012, at 3:56 , Simon Peyton-Jones wrote:

 Particularly valuable are offers to take responsibility for a
 particular area (eg the LLVM code generator, or the FFI).  I'm
 hoping that this sea change will prove to be quite empowering,
 with GHC becoming more and more a community project, more
 resilient with fewer single points of failure. 

The LLVM project has recently come to the same point. The codebase has become 
too large for Chris Lattner to keep track of it all, so they've moved to a 
formal Code Ownership model. People own particular directories of the code 
base, and the code owners are expected to review patches for those directories.

The GHC project doesn't have a formal patch review process, I think because the 
people with commit access on d.h.o generally know who owns what. Up until last 
week I think it was SPJ owns the type checker and simplifier, and SM owns 
everything else. :-)

At this stage, I think it would help if we followed the LLVM approach of having 
a formal CODE_OWNERS file in the root path of the repo explicitly listing the 
code owners. That way GHC HQ knows what's covered and what still needs a 
maintainer. The LLVM version is here [1].

Code owners would:
1) Be the go-to person when other developers have questions about that code.
2) Fix bugs in it that no-one else has claimed.
3) Generally keep the code tidy, documented and well-maintained.

Simon: do you want a CODE_OWNERS file? If so then I can start it. I think it's 
better to have it directly in the repo than on the wiki, that way no-one that 
works on the code can miss it.

I suppose I'm the default owner of the register allocators and non-LLVM native 
code generators.

Ben.

[1] http://llvm.org/viewvc/llvm-project/llvm/trunk/CODE_OWNERS.TXT?view=markup




___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: The end of an era, and the dawn of a new one

2012-12-06 Thread Ben Lippmeier

On 07/12/2012, at 4:21 , Ian Lynagh wrote:

 On Thu, Dec 06, 2012 at 09:56:55PM +1100, Ben Lippmeier wrote:
 
 I suppose I'm the default owner of the register allocators and non-LLVM 
 native code generators.
 
 Great, thanks!
 
 By the way, if you feel like doing some hacking this holiday season,
 then you might be interested in
http://hackage.haskell.org/trac/ghc/ticket/7063

Ah, holidays. Finally I'll have time to get some work done... :-)

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users




Re: [Haskell-cafe] Are there REPA linear algebra routines? e.g. Eigenvalues?

2012-12-06 Thread Ben Lippmeier

On 06/12/2012, at 3:18 , KC wrote:

 :)

Not apart from the matrix-matrix multiply code in repa-algorithms. If you 
wanted to write some I'd be happy to fold them into repa-algorithms.

Ben.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: GHC Performance Tsar

2012-12-04 Thread Ben Lippmeier

On 01/12/2012, at 1:42 AM, Simon Peyton-Jones wrote:

 |  While writing a new nofib benchmark today I found myself wondering
 |  whether all the nofib benchmarks are run just before each release,
 
 I think we could do with a GHC Performance Tsar.  Especially now that Simon 
 has changed jobs, we need to try even harder to broaden the base of people 
 who help with GHC.  It would be amazing to have someone who was willing to:
 
 * Run nofib benchmarks regularly, and publish the results
 
 * Keep baseline figures for GHC 7.6, 7.4, etc so we can keep
   track of regressions
 
 * Investigate regressions to see where they come from; ideally
   propose fixes.
 
 * Extend nofib to contain more representative programs (as Johan is
   currently doing).
 
 That would help keep us on the straight and narrow.  


I was running a performance regression buildbot for a while a year ago, but 
gave it up because I didn't have time to chase down the breakages. At the time 
we were primarily worried about the asymptotic performance of DPH, and fretting 
about a few percent absolute performance was too much of a distraction. 

However: if someone wants to pick this up then they may get some use out of the 
code I wrote for it. The dph-buildbot package in the DPH repository should 
still compile. This package uses 
http://hackage.haskell.org/package/buildbox-1.5.3.1 which includes code for 
running tests, collecting the timings, comparing against a baseline, making 
pretty reports etc. There is then a second package buildbox-tools which has a 
command line tool for listing the benchmarks that have deviated from the 
baseline by a particular amount.

Here is an example of a report that dph-buildbot made: 

http://log.ouroborus.net/limitingfactor/dph/nightly-20110809_000147.txt

Ben.




___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Reaching Max Bolingbroke

2012-11-18 Thread Ben Lippmeier

On 19/11/2012, at 24:40 , Roman Cheplyaka wrote:

 For the last two months I've been trying to reach Max Bolingbroke via
 his hotmail address, github and linkedin, but did not succeed.
 
 Does anyone know if he's well? If someone could help by telling him that
 I'd like to get in touch, I'd appreciate that.

He wasn't at ICFP either. I think SPJ said he was in the middle of writing up 
his PhD thesis.

When I was doing mine I was out of circulation for a good 3 months.

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: parallel garbage collection performance

2012-06-18 Thread Ben Lippmeier

On 19/06/2012, at 24:48 , Tyson Whitehead wrote:

 On June 18, 2012 04:20:51 John Lato wrote:
 Given this, can anyone suggest any likely causes of this issue, or
 anything I might want to look for?  Also, should I be concerned about
 the much larger gc_alloc_block_sync level for the slow run?  Does that
 indicate the allocator waiting to alloc a new block, or is it
 something else?  Am I on completely the wrong track?
 
 A total shot in the dark here, but wasn't there something about really bad 
 performance when you used all the CPUs on your machine under Linux?
 
 Presumably very tight coupling that is causing all the threads to stall 
 everytime the OS needs to do something or something?

This can be a problem for data parallel computations (like in Repa). In Repa 
all threads in the gang are supposed to run for the same time, but if one gets 
swapped out by the OS then the whole gang is stalled.

I tend to get best results using -N7 for an 8 core machine. 

It is also important to enable thread affinity with the -qa flag. 

For a Repa program on an 8 core machine I use +RTS -N7 -qa -qg

Ben.



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: parallel garbage collection performance

2012-06-18 Thread Ben Lippmeier

On 19/06/2012, at 10:59 , Manuel M T Chakravarty wrote:

 I wonder, do we have a Repa FAQ (or similar) that explain such issues? (And 
 is easily discoverable?)

I've been trying to collect the main points in the haddocs for the main module 
[1], but this one isn't there yet.

I need to update the Repa tutorial on the Haskell wiki, and this should also 
go in it.

Ben.

[1] 
http://hackage.haskell.org/packages/archive/repa/3.2.1.1/doc/html/Data-Array-Repa.html


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: parallel garbage collection performance

2012-06-18 Thread Ben Lippmeier

On 19/06/2012, at 13:53 , Ben Lippmeier wrote:

 
 On 19/06/2012, at 10:59 , Manuel M T Chakravarty wrote:
 
 I wonder, do we have a Repa FAQ (or similar) that explain such issues? (And 
 is easily discoverable?)
 
 I've been trying to collect the main points in the haddocs for the main 
 module [1], but this one isn't there yet.
 
 I need to update the Repa tutorial, on the Haskell wiki, and this should also 
 go in it


I also added thread affinity to the Repa FAQ [1].

Ben.

[1] http://repa.ouroborus.net/




___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Is Repa suitable for boxed arrays?...

2012-06-03 Thread Ben Lippmeier

On 03/06/2012, at 18:10 , Stuart Hungerford wrote:

 I need to construct a 2D array of a Haskell  data type (boxed ?)
 where each array entry value depends on values earlier in the same
 array (i.e. for entries in previous row/column indexes).

It should work. Use the V type-index for boxed arrays [1], so your array type 
will be something like (Array V DIM2 Float)

If you can't figure it out then send me a small list program showing what you 
want to do.
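For the "each entry depends on earlier entries" pattern, laziness already does the heavy lifting for boxed arrays. Here is a minimal, Repa-free sketch using the standard Data.Array boxed arrays and a hypothetical Fibonacci-style recurrence; the same idea carries over to an (Array V DIM2 Float) in Repa:

```haskell
import Data.Array

-- A boxed array whose entries refer to earlier entries in the same
-- array. Laziness makes the self-reference legal: each cell is a
-- thunk that forces the earlier cells on demand. The recurrence
-- here is illustrative only.
table :: Array Int Integer
table = listArray (0, 9) [ f i | i <- [0 .. 9] ]
  where
    f 0 = 0
    f 1 = 1
    f i = table ! (i - 1) + table ! (i - 2)

main :: IO ()
main = print (elems table)
```

Note that this style forces the array to stay boxed; an unboxed array would demand every element eagerly and the self-reference would loop.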


 Repa (V3.1.4.2) looks very powerful and flexible but it's still not
 clear to me that it will work with arbitrary values as I haven't been
 able to get any of the Wiki tutorial array creation examples to work
 (this is with Haskell platform 2012.2 pre-release for OS/X).

The wiki tutorial is old. It was written for the Repa 2 series, but Repa 3 is 
different. However I just (just) submitted a paper on Repa 3 to Haskell 
Symposium, which might help [2]

[1] 
http://hackage.haskell.org/packages/archive/repa/3.1.4.2/doc/html/Data-Array-Repa-Repr-Vector.html
[2] http://www.cse.unsw.edu.au/~benl/papers/guiding/guiding-Haskell2012-sub.pdf


Ben.




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHCi runtime linker: fatal error (was Installing REPA)

2012-04-09 Thread Ben Lippmeier

On 08/04/2012, at 2:41 AM, Dominic Steinitz wrote:
 Hi Ben, Chris and Others,
 
 Thanks for your replies and suggestions. All I want to do is invert (well 
 solve actually) a tridiagonal matrix so upgrading ghc from the version that 
 comes with the platform seems a bit overkill. I think I will go with Chris' 
 suggestion for now and maybe upgrade ghc (and REPA) when I am feeling braver.
 
 Dominic.
 Sadly I now get this when trying to multiply two matrices. Is this because I 
 have two copies of Primitive? I thought Cabal was supposed to protect me from 
 this sort of occurrence. Does anyone have any suggestions on how to solve 
 this?

You'll need to upgrade. Trying to support old versions of software is a lost 
cause.

I pushed Repa 3.1 to Hackage on the weekend. It has a *much* cleaner API. I 
can't recommend continuing to use Repa 2. You will just run into all the 
problems that are now fixed in Repa 3. 

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Installing REPA

2012-04-07 Thread Ben Lippmeier

On 07/04/2012, at 9:33 AM, Chris Wong wrote:

 On Sat, Apr 7, 2012 at 2:02 AM, Dominic Steinitz
 idontgetoutm...@googlemail.com wrote:
 Hi,
 
 I'm trying to install REPA but getting the following. Do I just install
 base? Or is it more complicated than that?
 
 Thanks, Dominic.
 
 I think the easiest solution is to just use an older version of Repa.
 According to Hackage, the latest one that works with base 4.3 is Repa
 2.1.1.3:
 
 $ cabal install repa==2.1.1.3

I've just pushed Repa 3 onto Hackage, which has a much better API than the 
older versions, and solves several code fusion problems. However, you'll need 
to upgrade to GHC 7.4 to use it. GHC 7.0.3 is two major releases behind the 
current version.

Ben.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Installing REPA

2012-04-07 Thread Ben Lippmeier

On 07/04/2012, at 21:38 , Peter Simons wrote:

 Hi Ben,
 
 I've just pushed Repa 3 onto Hackage, which has a much better API
 than the older versions, and solves several code fusion problems.
 
 when using the latest version of REPA with GHC 7.4.1, I have trouble
 building the repa-examples package:
 
 | Building repa-examples-3.0.0.1...
 | Preprocessing executable 'repa-volume' for repa-examples-3.0.0.1...

 When I attempt to use repa 3.1.x, the build won't even get past the
 configure stage, because Cabal refuses these dependencies. Is that a
 known problem, or am I doing something wrong?

It is a conjunction of tedious Cabal and Hackage limitations, as well as my 
failure to actually upload the new repa-examples package.

Please try again now, and if that doesn't work email me the output of:

$ cabal update
$ cabal install repa-examples
$ ghc-pkg list

Thanks,
Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: What do the following numbers mean?

2012-04-02 Thread Ben Lippmeier

On 02/04/2012, at 10:10 PM, Jurriaan Hage wrote:
 Can anyone tell me what the exact difference is between
  1,842,979,344 bytes maximum residency (219 sample(s))
 and
   4451 MB total memory in use (0 MB lost due to fragmentation)
 
 I could not find this information in the docs anywhere, but I may have missed 
 it.

The maximum residency is the peak amount of live data in the heap. The total 
memory in use is the peak amount that the GHC runtime requested from the 
operating system. Because the runtime system ensures that the heap is always 
bigger than the size of the live data, the second number will be larger.

The maximum residency is determined by performing a garbage collection, which 
traces out the graph of live objects. This means that the number reported may 
not be the exact peak memory use of the program, because objects could be 
allocated and then become unreachable before the next sample. If you want a 
more accurate number then increase the frequency of the heap sampling with the 
-i<sec> RTS flag.

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


[Haskell] ANNOUNCE: Disciple Core Interpreter 0.2.1

2012-02-22 Thread Ben Lippmeier

The Disciplined Disciple Compiler (DDC) is being stripped down, cleaned and 
rebuilt with 100% less known bugs and unfortunate holes. The first pieces are 
now ready for human consumption, namely a new core language and interpreter for 
it. 

There is a tutorial including Hackage links here:
  http://disciple.ouroborus.net/wiki/Tutorial/Core

Read more about the project on the wiki: 
  http://disciple.ouroborus.net/

Cheers,
Ben.


___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] Error in installing dph-examples on Mac OS X 10.7.3

2012-02-09 Thread Ben Lippmeier

On 10/02/2012, at 6:12 AM, mukesh tiwari wrote:

 Hello all 
 I am trying to install dph-examples on Mac OS X version 10.7.3 but getting 
 this error. I am using ghc-7.4.1.


This probably isn't DPH specific. Can you compile a hello world program with 
-fllvm?

Ben.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Loading a texture in OpenGL

2012-02-06 Thread Ben Lippmeier

On 07/02/2012, at 7:00 AM, Clark Gaebel wrote:

 Using the OpenGL package on Hackage, how do I load a texture from an array?
 
 In the red book[1], I see their code using glGenTextures and glBindTexture, 
 but I can't find these in the documentation. Are there different functions I 
 should be calling?

The Gloss graphics library has texture support, and the code for drawing them 
is confined to this module:

http://code.ouroborus.net/gloss/gloss-head/gloss/Graphics/Gloss/Internals/Render/Picture.hs

Feel free to steal the code from there.

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Loading a texture in OpenGL

2012-02-06 Thread Ben Lippmeier

On 07/02/2012, at 2:40 PM, Clark Gaebel wrote:

 Awesome. Thanks!
 
 As a follow up question, how do I add a finalizer to a normal variable? 
 OpenGL returns an integer handle to your texture in graphics memory, and you 
 have to call deleteObjectNames on it. Is there any way to have this 
 automatically run once we lose all references to this variable (and all 
 copies)?

I don't know. I've only used ForeignPtrs with finalisers before [1].

One problem with these finalisers is that GHC provides no guarantees on when 
they will be run. It might be just before the program exits, instead of when 
the pointer actually becomes unreachable. Because texture memory is a scarce 
resource, I wouldn't want to rely on a finaliser to free it -- though I suppose 
this depends on what you're doing.

Ben.

[1] 
http://www.haskell.org/ghc/docs/latest/html/libraries/haskell2010-1.1.0.1/Foreign-ForeignPtr.html


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Loading a texture in OpenGL

2012-02-06 Thread Ben Lippmeier

On 07/02/2012, at 2:50 PM, Clark Gaebel wrote:

 I would be running the GC manually at key points to make sure it gets cleaned 
 up. Mainly, before any scene changes when basically everything gets thrown 
 out anyways.


From the docs:

newForeignPtr :: FinalizerPtr a -> Ptr a -> IO (ForeignPtr a)

Turns a plain memory reference into a foreign pointer, and associates a 
finalizer with the reference. The finalizer will be executed after the last 
reference to the foreign object is dropped. There is no guarantee of 
promptness, however the finalizer will be executed before the program exits.


No guarantee of promptness. Even if the GC knows your pointer is unreachable, 
it might choose not to call the finaliser. I think people have been bitten by 
this before.
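As a base-only sketch of the pattern being discussed, this attaches the standard free() finalizer to a malloc'd buffer (the value 42 and the 4-byte size are arbitrary):

```haskell
import Foreign.ForeignPtr (newForeignPtr, withForeignPtr)
import Foreign.Marshal.Alloc (mallocBytes, finalizerFree)
import Foreign.Storable (peek, poke)
import Foreign.C.Types (CInt)

main :: IO ()
main = do
  p <- mallocBytes 4
  poke p (42 :: CInt)
  -- free() will run some time after fp becomes unreachable;
  -- as noted above, there is no promptness guarantee.
  fp <- newForeignPtr finalizerFree p
  v <- withForeignPtr fp peek
  print (v :: CInt)
```

Because the finaliser may be deferred arbitrarily, this is fine for plain memory but risky for scarce handles like GPU textures.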

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Compiling dph package with ghc-7.4.0.20111219

2012-01-22 Thread Ben Lippmeier

On 21/01/2012, at 22:47 , mukesh tiwari wrote:

 Hello all 
 I have installed ghc-7.4.0.20111219  and this announcement says that The 
 release candidate accidentally includes the random, primitive, vector and dph 
 libraries. The final release will not include them. I tried to compile  a 
 program 
 
 [ntro@localhost src]$ ghc-7.4.0.20111219 -c -Odph -fdph-par ParallelMat.hs 
 ghc: unrecognised flags: -fdph-par
 Usage: For basic information, try the `--help' option.
 [ntro@localhost src]$ ghc-7.2.1 -c -Odph -fdph-par ParallelMat.hs  

The -fdph-par flag doesn't exist anymore, but we haven't had a chance to update 
the wiki yet. Use -package dph-lifted-vseg to select the backend. You could 
also look at the cabal file for the dph-examples package to see what flags we 
use when compiling.

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?

2011-12-20 Thread Ben Lippmeier

On 20/12/2011, at 6:06 PM, Roman Cheplyaka wrote:

 * Alexander Solla alex.so...@gmail.com [2011-12-19 19:10:32-0800]
 * Documentation that discourages thinking about bottom as a 'value'.  It's
 not a value, and that is what defines it.
 
 In denotational semantics, every well-formed term in the language must
 have a value. So, what is a value of fix id?

There isn't one!

Bottoms will be the null pointers of the 2010's, you watch.
 
Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?

2011-12-20 Thread Ben Lippmeier

On 20/12/2011, at 9:06 PM, Thiago Negri wrote:
 There isn't one!
 
 Bottoms will be the null pointers of the 2010's, you watch.


 How would you represent it then?

Types probably. In C, the badness of null pointers is that when you inspect an  
int*  you don't always find an int. Of course the superior Haskell solution is 
to use algebraic data types, and represent a possibly exceptional integer by 
Maybe Int. But then when you inspect a Maybe Int you don't always get an .. 
ah.


 Would it cause a compiler error?


Depends whether you really wanted an Int or not.

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?

2011-12-20 Thread Ben Lippmeier

  In denotational semantics, every well-formed term in the language must
  have a value. So, what is a value of fix id?
 
 There isn't one!
 
 Bottoms will be the null pointers of the 2010's, you watch.
 
 This ×1000. Errors go in an error monad.
 
 Including all possible manifestations of infinite loops?

Some would say that non-termination is a computational effect, and I can argue 
either way depending on the day of the week.

Of course, the history books show that monads were invented *after* it was 
decided that Haskell would be a lazy language. Talk about selection bias.

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] If you'd design a Haskell-like language, what would you do different?

2011-12-20 Thread Ben Lippmeier

On 20/12/2011, at 21:52 , Gregory Crosswhite wrote:
 
 Some would say that non-termination is a computational effect, and I can 
 argue either way depending on the day of the week.
 
 *shrug*  I figure that whether you call _|_ a value is like whether you 
 accept the Axiom of Choice:  it is a situational decision that depends on 
 what you are trying to learn more about.

I agree, but I'd like to have more control over my situation. Right now we have 
boxed and lifted Int, and unboxed and unlifted Int#, but not the boxed and 
unlifted version, which IMO is usually what you want.


 Of course, the history books show that monads were invented *after* it was 
 decided that Haskell would be a lazy language. Talk about selection bias.
 
 True, but I am not quite sure how that is relevant to _|_...

I meant to address the implicit question why doesn't Haskell use monads to 
describe non-termination already. The answer isn't necessarily because it's 
not a good idea, it's because that wasn't an option at the time.


 On Dec 20, 2011, at 14:40, Jesse Schalken jesseschal...@gmail.com wrote:


 Including all possible manifestations of infinite loops?
 
 So... this imaginary language of yours would be able to solve the halting 
 problem?

All type systems are incomplete. The idea is to do a termination analysis, and 
if the program can not be proved to terminate, then it is marked as possibly 
non-terminating. This isn't the same as deciding something is *definitely* 
non-terminating, which is what the halting problem is about. This possibly 
non-terminating approach is already used by Coq, Agda and other languages.

Ben.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] (Repa) hFlush: illegal operation

2011-08-24 Thread Ben Lippmeier

On 25/08/2011, at 7:15 , Michael Orlitzky wrote:

 I'm using Repa to process a ton of MRI data. The basic process is,
 
  * Read in the data
  * Create a big 'ol data structure (grid) from it
  * Compute the output in parallel using 'traverse'
  * Write the output to file
 
 However, during the last step, I'm getting,
 
  $ ./bin/spline3 +RTS -N4
  spline3: output.txt: hFlush: illegal operation (handle is closed)


 read_values_1d :: FilePath -> IO Values1D
 read_values_1d path = readVectorFromTextFile path

The implementation of the text IO functions is fairly naive, just using Haskell 
Strings etc under the covers. It may have problems with massive files. 

Can you send me some gzipped data of the same size/form as what you're using, 
or tell me where to download it? Even if your real source data is several GB in 
size, if you make a test file with mostly zeros it should gzip down to nothing.

Also, what is the native form of the data you are using? If it's in some 
standard binary form it may just be easier to write a native Repa loader for it.

Cheers,
Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Build failure of syb-with-class with ghc-7.2.1

2011-08-09 Thread Ben Lippmeier

On 09/08/2011, at 23:15 , Sergei Trofimovich wrote:

 the HEAD of syb-with-class fails with the following error when build
 with ghc-7.2.1 and template-haskell-2.6:
 
 http://code.google.com/p/syb-with-class/issues/detail?id=4
 
 Is this a bug in TH?
 
 Very likely:
http://hackage.haskell.org/trac/ghc/ticket/5362

In TH code you now need to use mkName at variable uses instead of the names 
created directly with newName. Repa had a similar problem.
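One hypothetical shape of the workaround, for a splice that builds (\x -> x): both the binder and the occurrence come from mkName, so the two sites agree on the name. (The splice is inlined into main to avoid the Template Haskell stage restriction.)

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- mkName "x" is used at both the binding site (VarP) and the
-- use site (VarE), instead of threading a name from newName.
main :: IO ()
main = print ($(let x = mkName "x"
                in  return (LamE [VarP x] (VarE x)))
              (5 :: Int))
```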

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] logic and types

2011-07-31 Thread Ben Lippmeier

On 01/09/2011, at 8:48 , Patrick Browne wrote:

 Hi,
 Below are some questions about the logical interpretation of types and
 type classes.
 
 Thanks,
 Pat
 
 module J where
 type Id = String
 type Position = Integer
 data Person = Person Id Position deriving Show
 
 -- Is this an axiom at type level?
 class Pos a where
   getPos :: a -> Position

One way to think of a type class is that it defines a set of types. For 
example, Eq is the set of types that support equality, and Pos is the set of 
types that have a position. By giving the class definition you've defined what 
it means to be a member of that set, namely that members must support the 
'getPos' method, but without instances that set is empty. Whether you treat 
this bare class definition as an axiom depends on what you want from your 
logical system. 


 -- The :type command says
 -- forall a. (Pos a) => a -> Position
 -- How do I write this in logic? (e.g. implies, and, or, etc)

Type systems are logical systems, there is no difference. Granted, some systems 
correspond to parts of others, but there is no single logical system that can 
be considered to be *the logic*. An equivalent question would be: how do I 
write this in functional programming?


 -- What exactly is being asserted about the type variable and/or about
 the class?

If you ignore the possibility that the function could diverge, then it says: 
"For all types a, given that 'a' is a member of the set Pos, and given a value 
of type 'a', then we can construct a Position."

Note that this doesn't guarantee that there are any types 'a' that are members 
of Pos. In Haskell you can define a type class, but not give instances for it, 
and still write functions using the type class methods.
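A minimal sketch of that last point: the class below has no instances at all, yet a function written against it still typechecks, because the Pos constraint simply propagates to any eventual caller.

```haskell
type Position = Integer

-- A class with an empty instance set.
class Pos a where
  getPos :: a -> Position

-- Still well-typed: the (Pos a) obligation is deferred to callers,
-- none of whom can ever discharge it until an instance exists.
shifted :: Pos a => a -> Position
shifted x = getPos x + 1

main :: IO ()
main = putStrLn "typechecks with an empty instance set"
```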


 -- I am not sure of the respective roles of => and -> in a logical context

Once again: which logic? The type system that checks GHC core is itself a 
logical system. GHC core has recently been rejigged so that type class 
constraints are just the types of dictionaries. In this case we have:

 forall (a: *). Pos a -> a -> Position

In DDC core, there are other sorts of constraints besides type class 
constraints. In early stages of the compiler we encode type class constraints 
as dependent kinds, so have this:

 forall (a: *). forall (_: Pos a). a -> Position.

Both are good, depending on how you're transforming the core program.


 -- Is the following a fact at type level, class level or both?
 instance Pos Person where
  getPos (Person i p) = p

If you take the GHC approach, a type class declaration and instance is 
equivalent to this:

data Pos a 
 = PosDict { getPos :: Pos a -> a -> Position }

dictPosPerson :: Pos Person
dictPosPerson
 = PosDict (\d (Person i p) -> p)

From this we've got two facts:
 Pos :: * -> *
 dictPosPerson :: Pos Person

You could interpret this as:
 1) There is a set of types named Pos
 2) There is an element of this set named Person.
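The dictionary encoding above can be sketched as a runnable fragment. The names PosDict and getPosD are illustrative (chosen to avoid clashing with the class in the question), and the explicit self-parameter mirrors the shape used in the post:

```haskell
type Id = String
type Position = Integer
data Person = Person Id Position

-- Hand-written analogue of the class dictionary: a record of the
-- class methods, where each method also receives the dictionary.
data PosDict a = PosDict { getPosD :: PosDict a -> a -> Position }

-- The "instance" is just a concrete dictionary value.
dictPosPerson :: PosDict Person
dictPosPerson = PosDict (\_d (Person _i p) -> p)

main :: IO ()
main = print (getPosD dictPosPerson dictPosPerson (Person "bob" 2))
```

A class method call then desugars to selecting the method out of the dictionary and passing the dictionary along.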


 -- Is it the evaluation or the type checking that provides a proof of
 type correctness?
 -- getPos(Person 1 2)

The type inferencer constructs a proof that a Haskell source program is well 
typed. It does this by converting it to GHC core, which is a formal logical 
system. The core program itself is a proof that there is a program which has 
its type. The type checker for GHC core then checks that this proof is valid.

Ben.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] Haskell Implementors Workshop talk proposals due this Friday!

2011-07-20 Thread Ben Lippmeier
Call for Talks
ACM SIGPLAN Haskell Implementors' Workshop

http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2011
Tokyo, Japan, September 23rd, 2011
  The workshop will be held in conjunction with ICFP 2011
 http://www.icfpconference.org/icfp2011/

Important dates

Proposal Deadline:  22nd July      2011
Notification:        8th August    2011
Workshop:           23rd September 2011

The Haskell Implementors' Workshop is to be held alongside ICFP 2011
this year in Tokyo, Japan. There will be no proceedings; it is an
informal gathering of people involved in the design and development
of Haskell implementations, tools, libraries, and supporting
infrastructure.

This relatively new workshop reflects the growth of the user
community: there is a clear need for a well-supported tool chain for
the development, distribution, deployment, and configuration of
Haskell software.  The aim is for this workshop to give the people
involved with building the infrastructure behind this ecosystem an
opportunity to bat around ideas, share experiences, and ask for
feedback from fellow experts.

We intend the workshop to have an informal and interactive feel, with
a flexible timetable and plenty of room for ad-hoc discussion, demos,
and impromptu short talks.


Scope and target audience
-------------------------

It is important to distinguish the Haskell Implementors' Workshop from
the Haskell Symposium which is also co-located with ICFP 2011. The
Haskell Symposium is for the publication of Haskell-related research. 
In contrast, the Haskell Implementors' Workshop will have no
proceedings -- although we will aim to make talk videos, slides and 
presented data available with the consent of the speakers.

In the Haskell Implementors' Workshop we hope to study the underlying
technology. We want to bring together anyone interested in the nitty
gritty details necessary to turn a text file into a deployed product.
Having said that, members of the wider Haskell community are more than
welcome to attend the workshop -- we need your feedback to keep the
Haskell ecosystem thriving.

The scope covers any of the following topics. There may be some topics
that people feel we've missed, so by all means submit a proposal even
if it doesn't fit exactly into one of these buckets:

  * Compilation techniques
  * Language features and extensions
  * Type system implementation
  * Concurrency and parallelism: language design and implementation
  * Performance, optimisation and benchmarking
  * Virtual machines and run-time systems
  * Libraries and Tools for development or deployment


Talks
-----

At this stage we would like to invite proposals from potential speakers
for a relatively short talk. We are aiming for 20 min talks with 10 mins
for questions and changeovers. We want to hear from people writing
compilers, tools, or libraries, people with cool ideas for directions in
which we should take the platform, proposals for new features to be
implemented, and half-baked crazy ideas. Please submit a talk title and
abstract of no more than 200 words to b...@cse.unsw.edu.au

We will also have a lightning talks session which will be organised on
the day. These talks will be 2-10 minutes, depending on available time.
Suggested topics for lightning talks are to present a single idea, a
work-in-progress project, a problem to intrigue and perplex Haskell
implementors, or simply to ask for feedback and collaborators.


Organisers
----------

  * Rebekah Leslie   (Portland State University)
  * Ben Lippmeier - co-chair (University of New South Wales)
  * Andres Loeh  (Well-Typed LLP)
  * Oleg Lobachev(University of Marburg)
  * Neil Mitchell - co-chair (Standard Chartered)
  * Dimitrios Vytiniotis (Microsoft Research)



___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


[Haskell-cafe] Haskell Implementors Workshop talk proposals due this Friday!

2011-07-20 Thread Ben Lippmeier


[Haskell] Haskell Implementors Workshop 2011, Second CFT

2011-07-04 Thread Ben Lippmeier


[Haskell-cafe] Haskell Implementors Workshop 2011, Second CFT

2011-07-04 Thread Ben Lippmeier


Re: [Haskell-cafe] Can it be proven there are no intermediate useful type classes between Applicative Functors Monads?

2011-06-05 Thread Ben Lippmeier

On 06/06/2011, at 5:51 , KC wrote:

 If new intermediate classes crop up then there would be no point in fixing
 
 class (Applicative m) => Monad m where
 
 since it would have to be changed if new intermediate classes are found.
 
 I realize non-existence proofs are hard.

Not as hard as formalising "useful".

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: repa: fromVector

2011-05-19 Thread Ben Lippmeier

On 19/05/2011, at 8:27 PM, Christian Höner zu Siederdissen wrote:

 I'd like to use repa in a rather perverted mode, I guess:
 
 for my programs I need to be able to update arrays in place and
 repeatedly perform operations on them.
 Right now, it basically works like this (in ST):
 
 - create unboxed space using primitive (same as unboxed vectors)
 - unsafefreeze unboxed space
 - perform calculations on frozen, immutable space
 - write result into mutable space (which is shared with the unsafefrozen
  space)

If you care deeply about in-place update, then you could use the parallel array 
filling functions directly: the ones in D.A.Repa.Internals.Eval*.hs. For 2D 
images, use fillVectorBlockwiseP [1] or fillCursoredBlock2P [2].


fillVectorBlockwiseP 
    :: Elt a
    => IOVector a       -- ^ vector to write elements into
    -> (Int -> a)       -- ^ fn to evaluate an element at the given index
    -> Int              -- ^ width of image
    -> IO ()


-- | Fill a block in a 2D image, in parallel.
--   Coordinates given are of the filled edges of the block.
--   We divide the block into columns, and give one column to each thread.
fillCursoredBlock2P
    :: Elt a
    => IOVector a                    -- ^ vector to write elements into
    -> (DIM2 -> cursor)              -- ^ make a cursor to a particular element
    -> (DIM2 -> cursor -> cursor)    -- ^ shift the cursor by an offset
    -> (cursor -> a)                 -- ^ fn to evaluate an element at the given index
    -> Int                           -- ^ width of whole image
    -> Int                           -- ^ x0 lower left corner of block to fill
    -> Int                           -- ^ y0 (low x and y value)
    -> Int                           -- ^ x1 upper right corner of block to fill
    -> Int                           -- ^ y1 (high x and y value, index of last elem to fill)
    -> IO ()


Actually, it might be worthwhile exporting these in the API anyway.

[1] 
http://code.ouroborus.net/repa/repa-head/repa/Data/Array/Repa/Internals/EvalBlockwise.hs
[2] 
http://code.ouroborus.net/repa/repa-head/repa/Data/Array/Repa/Internals/EvalCursored.hs



___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Is fusion overrated?

2011-05-18 Thread Ben Lippmeier

On 18/05/2011, at 15:55 , Roman Cheplyaka wrote:
 Of course I don't claim that fusion is useless -- just trying to
 understand the problem it solves. Are we saving a few closures and cons
 cells here?

And thunk allocations, and thunk entries. Entering a thunk costs upwards of 20 
cycles, while performing a single addition should only cost one. Imagine every 
thunk entry is a function call. You don't want to call a whole function just to 
add two numbers together.

Those few closures and cons cells can be surprisingly expensive when compared 
to native ALU instructions on a modern machine.
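As a rough illustration of what fusion buys (this example is mine, not from the thread): the pipeline below allocates an intermediate list and its thunks, while the hand-written loop is approximately what a fused pipeline compiles to: a single strict pass doing only arithmetic per element.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Unfused pipeline: filter and map each conceptually build cons
-- cells and thunks that sum must then enter and deallocate.
unfused :: Int
unfused = sum (map (* 2) (filter even [1 .. 1000000]))

-- Hand-fused loop: one pass, no intermediate list, a strict
-- accumulator. Roughly the shape fusion aims to produce.
fused :: Int
fused = go 0 1
  where
    go !acc i
      | i > 1000000 = acc
      | even i      = go (acc + 2 * i) (i + 1)
      | otherwise   = go acc (i + 1)

main :: IO ()
main = print (unfused == fused)   -- prints True
```

Both compute the same value; the difference shows up in allocation and mutator time, not in the result.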

Ben.





___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] repa Shape help

2011-05-09 Thread Ben Lippmeier

On 09/05/2011, at 15:31, bri...@aracnet.com wrote:

 main = do
  let x = A.fromList (AS.shapeOfList [2, 2]) ([1.0, 2.0, 3.0, 4.0]::[Double])
  putStrLn $ show x
 
 test_repa.hs:10:13:
Ambiguous type variable `sh' in the constraint:
  `Shape sh' arising from a use of `show' at test_repa.hs:10:13-18
Probable fix: add a type signature that fixes these type variable(s)
 Failed, modules loaded: none.
 
 After much staring at the type signatures I finally figured out that adding a 
 type annotation to x of :
 
  :: Array DIM2 Double
 
 would fix the problem, but I'm not completely clear as to why.

Because the GHC type system doesn't (yet) know that applying shapeOfList to a 
two-element list should yield a DIM2.


 after all fromList is typed:
 
 (Shape sh, Elt a) => sh -> [a] -> Array sh a
 
 Since it knows [a] is [Double] and sh must be - well I'm not really clear on 
 what sh is supposed to be.  therein lies my problem.  Although it does seem 
 that sh can be darn near anything, which is probably why it was ambiguous.

The shape gives the dimensionality of the array. The only valid choices for sh 
are DIM1, DIM2, DIM3 and so on, for 1-dimensional, 2-dimensional (etc.) arrays.


 At one point I had tried something like (2 :. 2) and got a whole host of 
 errors for that too, except that DIM2 is defined in exactly that way, so it's 
 not at all obvious why that didn't work.

Try (Z :. 2 :. 3). This is basically a list containing the column and row 
lengths. Similar to (3 : 2 : []), except that the list extends to the left 
instead of the right. The Z constructor is equivalent to [].
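Putting the two fixes together, a version of the original snippet that should compile looks like this. It assumes the repa 2.x API discussed in the thread (the explicit DIM2 annotation resolves the ambiguous `sh`, and the `Z :. rows :. cols` shape replaces shapeOfList); it requires the repa package to build.

```haskell
import Data.Array.Repa as R

-- The DIM2 annotation pins down the shape type that was
-- ambiguous in the original program.
x :: Array DIM2 Double
x = R.fromList (Z :. 2 :. 2) [1.0, 2.0, 3.0, 4.0]

main :: IO ()
main = print (R.toList x)
```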


Cheers,
Ben.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Python is lazier than Haskell

2011-04-29 Thread Ben Lippmeier

On 29/04/2011, at 6:08 PM, Malcolm Wallace wrote:

 On 29 Apr 2011, at 05:38, Ben Lippmeier b...@ouroborus.net wrote:
 
 Laziness at the value level causes space leaks, 
 
 This is well-worn folklore, but a bit misleading.  

:-) Like "permanent markers in the hands of children cause suffering". It's not 
a tautology, but an overgeneralisation that holds more often than not. 


 If anything, I think there is observation bias: lazy programmers have good 
 tools for identifying, finding, and removing leaks.  Others do not.

Sharp tools well honed through years of hunting them down. If only they were 
never there in the first place.


I don't disagree with you. My original comment was more bait than anything else.

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Python is lazier than Haskell

2011-04-28 Thread Ben Lippmeier

On 27/04/2011, at 7:30 PM, Henning Thielemann wrote:

  If Haskell is great because of its laziness,
   then Python must be even greater,
   since it is lazy at the type level.

Laziness at the value level causes space leaks, and laziness at the type level 
causes mind leaks. Neither are much fun.

When people start wanting laziness at the kind level we'll have to quarantine 
them before the virus spreads...

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] CFT -- Haskell Implementors' Workshop 2011

2011-04-19 Thread Ben Lippmeier



[Haskell-cafe] CFT -- Haskell Implementors' Workshop 2011

2011-04-19 Thread Ben Lippmeier



Re: [Haskell-cafe] Using DPH

2011-04-12 Thread Ben Lippmeier

On 12/04/2011, at 7:32 PM, Wilfried Kirschenmann wrote:

 Hi,
 
 In order to do a performance comparison beetween different approaches for our 
 application, I make different implementation of a simple example (computing 
 the norm of a vector expression.
 I rely on Repa to do this. 
 However, when I tried to build the parallel version (-threaded -fvectorise 
 -rtsopts), I got an error specifying that dph-par was not available. Indeed, 
 It wasn't.

Repa and DPH are different projects. The compilation mechanisms and approaches 
to parallelism are quite different between them. You only need -fvectorise to 
turn on the vectoriser for DPH code; you don't need (or want) -fvectorise for 
Repa programs. DPH is also still at the research prototype stage, and not yet 
at a point where you'd try to use it for anything real.

With your example code, you also need to use R.force at appropriate points, and 
add matches against @(Array _ [Region RangeAll (GenManifest _)]). The reasons 
for both of these are explained in [1]. Hopefully the second will be fixed by a 
subsequent GHC release. You must also add {-# INLINE fun #-} pragmas to 
polymorphic functions or you will pay the price of dictionary passing for the 
type class overloading.
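To make the INLINE point concrete (the function here is illustrative, not from the thread): without the pragma, a polymorphic function compiles once, receiving its Num dictionary at run time and making an indirect call per operation; with INLINE, GHC can specialise it at each call site and the overloading cost disappears.

```haskell
-- Illustrative only: without INLINE this would be compiled once,
-- taking a Num dictionary at run time; with INLINE, GHC inlines the
-- body at call sites and specialises it to Double.
{-# INLINE axpy #-}
axpy :: Num a => a -> a -> a -> a
axpy a x y = a * x + y

main :: IO ()
main = print (axpy (2 :: Double) 3 4)   -- prints 10.0
```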


With the attached code:

desire:tmp benl$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.0.3

desire:tmp benl$ ghc-pkg list |grep repa
repa-2.0.0.2
repa-algorithms-2.0.0.2
repa-bytestring-2.0.0.2
repa-io-2.0.0.2

desire:tmp benl$ ghc -rtsopts -threaded -O3 -fllvm -optlo-O3 -fno-liberate-case 
--make haskell.hs -XBangPatterns -fforce-recomp

desire:tmp benl$ /usr/bin/time ./haskell
[3.3645823e12]
72518800
6.62 real 6.39 user 0.22 sys


This runs but doesn't scale with an increasing number of threads. I haven't 
looked at why. If all the work is in R.sum then that might be the problem -- I 
haven't put much time into optimising reductions, just maps and filters.

Cheers,
Ben.

[1] http://www.cse.unsw.edu.au/~benl/papers/stencil/stencil-icfp2011-sub.pdf




haskell.hs
Description: Binary data
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Using DPH

2011-04-12 Thread Ben Lippmeier

On 12/04/2011, at 11:50 PM, Wilfried Kirschenmann wrote:

 Surprisingly, when removing the R.force from the code you attached,
 performance is better (speed-up = 2). I suppose, but am not sure, that
 this allows for loop fusion between the R.map and the R.sum.
 
 I use ghc 7.0.3, Repa 2.0.0.3 and LLVM 2.9.
 
 In the end, the performance of this new version (0.48s) is 15x
 better than my original version (6.9s).
 However, the equivalent sequential C code is still 15x better (0.034s).
 
 This may indeed be explained by the fact that all computations are
 performed inside the R.sum.

Yeah, the Repa fold and sum functions just use the equivalent Data.Vector ones. 
They're not parallelised and I haven't looked at the generated code. I'll add a 
ticket to the trac to fix these, but won't have time to work on it myself in 
the near future.

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: New codegen failing test-cases

2010-12-05 Thread Ben Lippmeier

On 06/12/2010, at 1:19 PM, David Terei wrote:

 I haven't looked at these branches for a fair few weeks, the problem
 when they fail to build usually is because all the libraries are just
 set to follow HEAD, they're not actually branched themselves, just the
 ghc compiler. So there are probably some patches from ghc HEAD that
 need to be pulled in to sync the compiler with the libs again. If you
 want to do some work on the new codegen the first step is to try to
 pull in all the patches from ghc HEAD, synchronising the branch. It's not
 a fun job, but GHC HQ wants to try to merge all the new codegen
 stuff into HEAD asap.
 
 libraries/dph/dph-par/../dph-common/Data/Array/Parallel.hs:1:14:
Unsupported extension: ParallelArrays
 make[1]: *** [libraries/dph/dph-par/dist-install/build/.depend-v.haskell] 
 Error 1
 make: *** [all] Error 2
 
 I can debug this further if you want me to.

We renamed the -XPArr language flag to -XParallelArrays. There was a patch to 
ghc-head that you'll have to pull or port across.

We're still actively working on DPH, and changes to the compiler often entail 
changes to the libraries.  If you haven't branched the libraries then your 
build is going to break on a weekly basis.

Ben.

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Type Directed Name Resolution

2010-11-11 Thread Ben Lippmeier

On 12/11/2010, at 2:26 AM, Malcolm Wallace wrote:

 The point is that refusing something you can have now (though
 of course it's an open question whether TDNR is something we can have
 now) out of fear that it'll prevent you getting something better
 later is speculative and often backfires.
 
 I think we are very far from having TDNR now.  It is really quite 
 complicated to interleave name resolution with type checking in any compiler. 
  So far, we have a design, that's all, no implementation.  We also have 
 (several) designs for proper record systems.

Disciple has TDNR, and there is an implementation in DDC. It is a bit 
complicated, mainly because you can't determine the call graph of the program 
before starting inference. In ML style inference you're supposed to 
let-generalise groups of recursive bindings together, but for TDNR you can only 
determine what is recursive once you've resolved the names (which depends on 
the types, which you need to infer).

The algorithm is described starting at page 168 in my thesis here: 
http://www.cse.unsw.edu.au/~benl/papers/thesis/lippmeier-impure-world.pdf

Disciple doesn't have type functions or associated types though. I think it'd 
be nicer for GHC if we could leverage some of the other extensions, as 
suggested in Mark Lentczner's post.

Ben.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] New blog on Disciple/DDC development

2010-10-15 Thread Ben Lippmeier
The blog is at:
   http://disciple-devel.blogspot.com/

more about the project here:
   http://trac.haskell.org/ddc/

Ben.

   
___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] Re: Paralelism and Distribution in Haskell

2010-09-07 Thread Ben Lippmeier

On 07/09/2010, at 6:11 PM, Johannes Waldmann wrote:

 Mathew de Detrich deteego at gmail.com writes:
 
 Haskell is still by far one of the best languages 
 to deal with concurrency/parallelism. 
 
 Sure, I fully agree.
 
 I am using concurrency (with explicit forkIO, communication via Chan) 
 a lot (my Haskell application controls several external constraint solvers).
 
 For parallelism, I'm just missing some benchmark code
 that I can run on my machine (i7 CPU, GTX 295 GPU, ghc-6.12.3)
 more or less out-of-the-box and that will impress my students and myself.
 (That is, get a speed-up of 8, or 480, without the program
 looking 8 times (or 480 times) more ugly...)
 

The matrix-matrix multiplication benchmark from the Repa library does this.

Check out 
  http://www.cse.unsw.edu.au/~benl/papers/repa/repa-icfp2010.pdf
  http://hackage.haskell.org/package/repa
  http://hackage.haskell.org/package/repa-examples

Though be warned you must use a recent GHC head build to get good performance. 
After GHC 7.0 is out (in a few weeks) we'll be able to release a properly 
stable version.

Note that speedup is an important consideration, but not the end of the 
story. It's harder to find a benchmark that displays all of nice code + speedup 
+ good absolute performance. The first and last of these tend not to be friends.

Ben.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] help me evangelize haskell.

2010-09-05 Thread Ben Lippmeier

On 05/09/2010, at 2:38 AM, Michael Litchard wrote:

 I'll be starting a new job soon as systems tool guy. The shop is a
 perl shop as far as internal automation tasks go. But I am fortunate
 to not be working with bigots. If they see a better way, they'll take
 to it. So please give me your best arguments in favor of using haskell
 for task automation instead of perl, or awk or any of those scripting
 languages.

Try to avoid religious arguments like "by using Perl you're living in a state 
of sin", and focus on "look how much easier it is to do X in Haskell". 

Grandiose, hand-wavy assertions like "strong typing leads to shorter 
development times and more reliable software" don't work on people who haven't 
already been there and done that. When you try to ram something down someone's 
throat they tend to resist. However, if you can provide something tasty and 
appealing they'll eat it themselves. Write a nice program, show it to your Perl 
programmer, and if they also think it's nice -- then you've already won.

Ben.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Deadlock with Repa

2010-07-29 Thread Ben Lippmeier

On 27/07/2010, at 11:24 PM, Jean-Marie Gaillourdet wrote:

 I've been trying to use repa and stumbled into rather strange behavior of my 
 program. Sometimes there seems to be a deadlock; at other times it seems 
 there is a livelock. Now, I've managed to prepare a version which 
 consistently generates the following output: 
 
 $ ./MDim
 MDim: thread blocked indefinitely in an MVar operation
 $
 
 But I don't use any MVar directly.  And the only used libraries which are not 
 part of ghc are repa and repa-algorithms. To state it clearly: I don't use any 
 MVars, par, pseq, forkIO nor any other parallelism or concurrency 
 functionality. The only thing my program uses is repa, which is supposed to 
 use some kind of parallelism as far as the documentation says. So I am 
 wondering whether this is a problem with my code, with repa, or with ghc.

This is a symptom of not having calls to force in the right place. Suppose 
you've created a thunk that sparks off a parallel computation. If some other 
parallel computation tries to evaluate it, then you've got nested parallelism. 
Operationally, it means that there was already a gang of threads doing 
something, but you tried to create a new one. 

The error message is poor, and we should really document it on the wiki. 
However, if you get this message then the program should still give the correct 
result. If it's really deadlocking then it's a bug.


 I'd be really happy if anyone could give me a hint how to debug this, or 
 whether I am able to do anything about it, at all. 

You'll want to add more calls to force to ensure that appropriate 
intermediate arrays are in manifest form. Using seq and deepSeqArray can also 
help.
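Repa's force is library-specific, but the underlying idea -- evaluate an intermediate result before handing it to later code -- can be sketched with plain seq from the Prelude (an illustration of the mechanism, not Repa code):

```haskell
-- seq evaluates its first argument to weak head normal form before
-- the second is returned, so the pair holds an evaluated Int rather
-- than a suspended computation that each consumer would redo.
pairWithForced :: Int -> (Int, Int)
pairWithForced n =
  let intermediate = sum [1 .. n]          -- a thunk until forced
  in  intermediate `seq` (intermediate, intermediate)
```

In Repa the analogous step is forcing an intermediate array into manifest form so that parallel consumers read evaluated data instead of re-entering a shared thunk.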

BTW: The Hackage repa package runs about 10x slower than it should against the 
current head, due to some changes in the inliner. I'm updating the package over 
the next few days, and I can also have a go at your example.

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] DpH/repa cache locality

2010-07-18 Thread Ben Lippmeier

The difficulty with optimising for cache effects is that you're effectively 
introducing sequential dependencies between elements of the array you're 
processing. 

To say this another way: If you can evaluate the array elements in any order 
then you can evaluate them in parallel. Adding restrictions like "element i 
must be processed before element i+1" can improve cache usage, but also 
restricts the evaluation order, and makes it less obvious how to parallelise 
the code.
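The order-independence point can be seen with base-only list functions (a sketch, not Repa code):

```haskell
-- Elements of a map are independent: any evaluation order gives the
-- same result, so the work can be distributed freely.
independent :: [Int]
independent = map (^ 2) [1 .. 5]     -- [1,4,9,16,25]

-- A scan threads each element into the next: element i+1 needs
-- element i first, which fixes the evaluation order.
dependent :: [Int]
dependent = scanl1 (+) [1 .. 5]      -- [1,3,6,10,15]
```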

For Repa, I think we'll end up providing primitive operators for doing things 
like 2d image convolutions. Moving from a linear image convolution, where all 
the pixels in one row are processed before moving onto the next, to a block 
based convolution is really a change in algorithm -- and not something I'd 
expect a general purpose compiler optimisation to do.

Ben.


On 13/07/2010, at 9:49 , Gregory Crosswhite wrote:

 Hey everyone,
 
 Just out of curiosity, what work is being done in the data parallel
 haskell / repa projects regarding cache locality?  The reason I am
 asking is because, as I understand it, the biggest bottleneck on today's
 processors are cache misses, and the reason why optimized
 platform-specific linear algebra libraries perform well is because they
 divide the data into chunks that are optimally sized for the cache in
 order to maximize the number of operations performed per memory access. 
 I wouldn't expect data parallel haskell/repa to automatically know what
 the perfect chunking strategy should be on each platform, but are there
 any plans being made at all to do something like this?
 
 (To be explicit, this isn't meant as a criticism;  I'm just curious and
 am interested in seeing discussion on this topic by those more
 knowledgeable than I.  :-) )
 
 Thanks!
 Greg
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lambda calculus and equational logic

2010-07-14 Thread Ben Lippmeier

On 14/07/2010, at 6:26 , Patrick Browne wrote:

 Thanks for your clear and helpful responses.
 I am aware that this question can lead to into very deep water.
 I am comparing Haskell with languages based on equational logic (EL)
 (e.g. Maude/CafeOBJ, lets call them ELLs).  I need to identify the
 fundamental distinction between the semantics of ELLs and Haskell. The
 focus of my original question was just the purely functional, side
 effect free, part of Haskell.

If you ignore anything to do with the IO monad (or ST), then all of Haskell can 
be desugared to (untyped) call-by-name/need lambda calculus. If you stick with 
Haskell98 then you can desugar it to the rank-2 fragment of System-F + 
algebraic data types + case expressions + appropriate constants and primops. 
This is generally regarded as the Haskell Kernel Language, which is mentioned 
but explicitly not defined in the language standard.
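A small example of the direction of that desugaring (ordinary Haskell on both sides, not actual GHC Core output):

```haskell
-- Surface Haskell: multiple equations with pattern matching...
safeHead :: [a] -> Maybe a
safeHead (x : _) = Just x
safeHead []      = Nothing

-- ...desugars to a single lambda and case expression, which is the
-- shape of the kernel language (System F plus algebraic data types
-- and case).
safeHead' :: [a] -> Maybe a
safeHead' = \xs ->
  case xs of
    x : _ -> Just x
    []    -> Nothing
```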


 The relationship between the denotational and the proof theoretic
 semantic is important for soundness and completeness. Which was sort of
 behind my original question.
 
 Would it be fair to say
 1) Lambda calculus provides the operational semantics for Haskell

Notionally yes, but practically no. AFAIC there isn't a formal semantics for 
Haskell, but there is for fragments of it, and for intermediate representations 
like System-Fc (on which GHC is based). There are also multiple lambda calculi, 
depending on which evaluation mechanism you use.

The point I was trying to make in the previous message is that while Haskell 
includes the IO monad, people insist on calling the whole language "purely 
functional" and "side effect free", which is murky territory. Sabry's "What is 
a Purely Functional Language?" shows that unrestricted beta-reduction is not 
sound in a simple monadic language using a pass-the-world implementation -- 
though Wouter's paper gives a cleaner one. Another paper that might help is 
Søndergaard and Sestoft's highly readable "Referential Transparency, 
Definiteness and Unfoldability".


 2) Maybe equational logic provides the denotational semantics.
 3)I am not sure of proof theoretic semantic for Haskell.
  The Curry-Howard correspondence is a proof theoretic view but only at
  type level.
 
 Obviously, the last three points are represent my efforts to address
 this question. Hopefully the café can comment on the accuracy of these
 points.

My (limited) understanding of Maude is that rewrites can happen at any point in 
the term being reduced. This is different from, say, the semantics of 
call-by-name lambda calculus which has a specific evaluation order. In Haskell 
it's no problem to pass a diverging expression to some function, which might 
store it in a data structure, but then discard it later. However, when rewrites 
can happen at any point in the term being reduced, if any part of it diverges 
then the whole thing does. This is just from skimming slides for Maude talks 
though...
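The point about storing and then discarding a diverging expression is easy to demonstrate in plain Haskell:

```haskell
-- Under call-by-need, a diverging term can be passed around, stored
-- in a data structure, and discarded without ever being evaluated.
discarded :: Int
discarded = fst (42, undefined)   -- 42; the undefined is never forced
```

Under a rewriting semantics where any subterm may be reduced, an evaluator that picked the `undefined` component would diverge, which is the contrast drawn above.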

Ben.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] lambda calculus and equational logic

2010-07-13 Thread Ben Lippmeier

On 13/07/2010, at 20:47 , Brent Yorgey wrote:

 On Wed, Jul 07, 2010 at 09:56:08AM +0100, Patrick Browne wrote:
 Hi,
 In Haskell what roles are played by 1)lambda calculus and 2) equational
 logic? Are these roles related?
 
 Hopefully this question can be answered at a level suitable for this forum.
 
 Since no one else has responded I'll take a quick stab at answering,
 and others can fill in more information as appropriate, or ask further
 questions.
 
  2) Haskell is able to embrace equational logic because of its
 insistence on purity: in a Haskell program (leaving aside for the
 moment things like seq and unsafePerformIO) you can always
 replace equals by equals (where equality is the normal
 beta-equality for System F omega, plus definitional equality
 introduced by Haskell declarations) without changing the
 semantics of your program. So the story of an equational logic
 for System F omega and the story of evaluating Haskell programs
 are to a large extent the same story.

Replacing equals by equals usually doesn't change anything. 

What kind of equality do you use for getChar :: IO Char?


  Coming up with equational
 logics corresponding to most imperative languages (or even a
 non-pure functional language like OCaml) is massively complicated
 by the need to keep track of an implicit state of the world due
 to the presence of side effects.

By "massively complicated" you mean harder than the simplest case. Haskell's 
do-notation makes the state of the world implicit, and performing the 
desugaring makes it explicit again -- but then that state isn't really the 
state of the world...
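A toy version of the pass-the-world reading, with an invented World type standing in for pending input (this illustrates the semantics under discussion, not GHC's actual implementation of IO):

```haskell
-- Pretend the world is just a list of pending input lines.
newtype World = World [String]

-- An "IO" action is a function that transforms the world.
newtype IO' a = IO' (World -> (a, World))

unit' :: a -> IO' a
unit' a = IO' (\w -> (a, w))

bind' :: IO' a -> (a -> IO' b) -> IO' b
bind' (IO' m) k =
  IO' (\w -> let (a, w') = m w
                 IO' m'  = k a
             in  m' w')

-- Consume one line of the fake world's input.
getLine' :: IO' String
getLine' = IO' (\(World (l : ls)) -> (l, World ls))

-- Run a program against an initial world, discarding the final world.
run :: IO' a -> [String] -> a
run (IO' m) input = fst (m (World input))
```

In this model the "state of the world" is explicit, which is exactly why unrestricted rewriting of `getLine'` terms would be unsound: two uses of `getLine'` are equal as values but thread different worlds.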

Sorry for my heckling. You gave a fine answer, to the standard question. 
However, I propose mandating that all such questions asked on the 
haskell-beginners list are answered with "Haskell's purity solves everything" 
-- but on haskell-cafe they should get "Haskell's purity solves everything, 
but read Sabry's paper on 'What is a Purely Functional Language?', because it's 
really more subtle than that."

Cheers,
Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Multidimensional Matrices in Haskell

2010-07-11 Thread Ben Lippmeier

I've found using Data.Vector works fine for this, just write an indexing 
function to handle the multiple dimensions.
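A minimal sketch of such an indexing function, shown here over a plain list rather than Data.Vector for brevity (the index arithmetic is the same whatever the flat backing store is):

```haskell
-- Row-major index into a flat one-dimensional store.
index2 :: Int -> Int -> Int -> Int
index2 width row col = row * width + col

-- A 2x3 matrix stored flat, row-major.
mat :: [Int]
mat = [ 1, 2, 3
      , 4, 5, 6 ]

-- Look up element (row, col).
at :: Int -> Int -> Int
at r c = mat !! index2 3 r c
```

With Data.Vector you would use unsafe or checked indexing instead of `!!`, but the width*row+col arithmetic carries over unchanged, and extends to more dimensions the same way.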

The gloss-examples package has a nice graphical demo of Conway's game of life 
that uses Vector. Gloss is specifically designed for beginners, so no monads 
required.

The code for the demo is at:
  http://code.haskell.org/gloss/gloss-stable/examples/Conway/

and the gloss homepage is at:
  http://trac.haskell.org/gloss/

Ben.


On 08/07/2010, at 12:08 AM, Mihai Maruseac wrote:

 Hi,
 
 A friend of mine wanted to do some Cellular Automata experiments in
 Haskell and was asking me what packages/libraries are there for
 multidimensional matrices. I'm interested in both immutable and
 mutable ones but I don't want them to be trapped inside a monad of any
 kind.
 
 Any hints?
 
 -- 
 MM
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Relating generated asm to Core

2010-06-23 Thread Ben Lippmeier

Hi Rami, 
You'll want to first look at the Cmm (C minus minus) code, which is the 
imperative intermediate language that GHC uses before conversion to assembly.

Do something like "ghc -c Whatever.hs -ddump-cmm". The names of the blocks of 
cmm code should match the ones in core. If not, then you might want to also 
look at the output of -ddump-stg.

Keep in mind that the assembly output by GHC will not look like that output by, 
say, GCC, because of the lazy evaluation method. You'll want to try to ignore 
the vast swathes of thunking code and just focus on the inner loops of your 
particular algorithm.

Ben.



On 23/06/2010, at 1:35 PM, Rami Mukhtar wrote:

 Hi,
  
 Can anyone tell me a way to identify the generated assembly (as found in the 
 intermediate files produced by GHC) corresponding to a particular fragment of 
 Core code.  
  
 Thanks,
 
 Rami
 
 The information in this e-mail may be confidential and subject to legal 
 professional privilege and/or copyright. National ICT Australia Limited 
 accepts no liability for any damage caused by this email or its attachments.
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Stone age programming for space age hardware?

2010-06-07 Thread Ben Lippmeier

On 07/06/2010, at 3:05 AM, Michael Schuerig wrote:

 I have a hunch that the real restrictions of this kind of software are 
 not concerned with fixed memory, iterations, whatever, but rather with 
 guaranteed bounds. If that is indeed the case, how feasible would it be 
 to prove relevant properties for systems programmed in Haskell?

For full Haskell that includes laziness and general recursion: not very. 
Proving properties about the values returned by functions is one thing, but 
giving good guaranteed upper bounds to the time and space used by an arbitrary 
program can be very difficult.

See for example:

Jörgen Gustavsson and David Sands, Possibilities and limitations of 
call-by-need space improvement, ICFP 2001: Proc. of the International 
Conference on Functional Programming, ACM, 2001, pp. 265–276.

Adam Bakewell and Colin Runciman, A model for comparing the space usage of lazy 
evaluators, PPDP 2000: Proc. of the International Conference on Principles and 
Practice of Declarative Programming, ACM, 2000, pp. 151–162.

Hans-Wolfgang Loidl. Granularity in Large-Scale Parallel Functional 
Programming. PhD Thesis. Department of Computing Science, University of 
Glasgow, March 1998.


I expect future solutions for this domain will look more like the Hume (family 
of) languages [1]. They give several language levels, and can give stronger 
bounds for programs using less language features.

[1] http://www-fp.cs.st-andrews.ac.uk/hume/index.shtml


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] Work on Video Games in Haskell

2010-05-26 Thread Ben Lippmeier

On 27/05/2010, at 9:01 AM, Edward Kmett wrote:
 While we can all acknowledge the technical impossibility of identifying the 
 original source language of a piece of code...


Uh,

desire:tmp benl$ cat Hello.hs
main = putStr "Hello"

desire:tmp benl$ ghc --make Hello.hs

desire:tmp benl$ strings Hello | head
Hello
base:GHC.Arr.STArray
base:GHC.Arr.STArray
base:GHC.Classes.D:Eq
base:GHC.Classes.D:Eq
failed to read siginfo_t
 failed: 
Warning: 
select
buildFdSets: file descriptor out of range

...




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell] Re: [Haskell-cafe] Work on Video Games in Haskell

2010-05-26 Thread Ben Lippmeier

Objects in the heap also have a very regular structure. They all have code 
pointers as their first word, which point to info tables that also have a 
regular structure [1]. GHC produced code is probably one of the easiest to 
identify out of all compiled languages...

http://hackage.haskell.org/trac/ghc/wiki/Commentary/Rts/Storage/HeapObjects

Ben.


On 27/05/2010, at 1:15 PM, Daniel Peebles wrote:

 Next up, binary obfuscation! Apple already uses these extensively in their 
 Fairplay code. Surely it isn't against the rules (yet?) to apply them to your 
 program before submitting it to the store? :P
 
 On Wed, May 26, 2010 at 11:01 PM, Ben Lippmeier b...@ouroborus.net wrote:
 
 On 27/05/2010, at 9:01 AM, Edward Kmett wrote:
  While we can all acknowledge the technical impossibility of identifying the 
  original source language of a piece of code...
 
 
 Uh,
 
 desire:tmp benl$ cat Hello.hs
 main = putStr "Hello"
 
 desire:tmp benl$ ghc --make Hello.hs
 
 desire:tmp benl$ strings Hello | head
 Hello
 base:GHC.Arr.STArray
 base:GHC.Arr.STArray
 base:GHC.Classes.D:Eq
 base:GHC.Classes.D:Eq
 failed to read siginfo_t
  failed:
 Warning:
 select
 buildFdSets: file descriptor out of range
 
 ...
 
 
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Parallel Haskell: 2-year project to push real world use

2010-05-03 Thread Ben Lippmeier

You can certainly create an array with these values, but in the provided code 
it looks like each successive array element has a serial dependency on the 
previous two elements. How were you expecting it to parallelise?

Repa arrays don't support visible destructive update. For many algorithms you 
shouldn't need it, and it causes problems for parallelisation.

I'm actively writing more Repa examples now. Can you send me some links 
explaining the algorithm that you're using, and some example data + output?

Thanks,
Ben.



On 04/05/2010, at 9:21 AM, Christian Höner zu Siederdissen wrote:

   a = array (1,10) [ (i, f i) | i <- [1..10]] where
  f 1 = 1
  f 2 = 1
  f i = a!(i-1) + a!(i-2)
 
 (aah, school ;)
 
 Right now, I am abusing vector in ST by doing this:
 
 a <- new
 a' <- freeze a
 forM_ [3..10] $ \i -> do
  write a i (a'!(i-1) + a'!(i-2))
 
 Let's say I wanted to do something like this in dph (or repa), does that
 work? We are actually using this for RNA folding algorithms that are at
 least O(n^3) time. For some of the more advanced stuff, it would be
 really nice if we could just parallelize.
 
 To summarise: I need arrays that allow in-place updates.
 
 Otherwise, most libraries that do heavy stuff (O(n^3) or worse) are
 using vector right now. On a single core, it performs really great --
 even compared to C-code that has been optimized a lot.
 
 Thanks and Viele Gruesse,
 Christian
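As an aside, the quoted Fibonacci-style recurrence needs no in-place update at all: a serial dependency on the previous elements is exactly what an unfold over immutable values expresses (a base-only sketch, saying nothing about Repa/DPH performance):

```haskell
-- Carry the last two elements as a pair; each step produces the next
-- pair from the previous one, with no destructive update anywhere.
fibs :: [Integer]
fibs = map fst (iterate (\(a, b) -> (b, a + b)) (1, 1))
```

For example, `take 10 fibs` gives the first ten values of the recurrence. Whether such a serial scan can then be parallelised is a separate question, which is Ben's point above.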

___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: Parallel Haskell: 2-year project to push real world use

2010-05-03 Thread Ben Lippmeier

On 03/05/2010, at 10:04 PM, Johan Tibell wrote:

 On Mon, May 3, 2010 at 11:12 AM, Simon Peyton-Jones simo...@microsoft.com 
 wrote:
 | Does this mean DPH is ready for abuse?
 |
 | The wiki page sounds pretty tentative, but it looks like it's been awhile
 | since it's been updated.
 |
 | http://www.haskell.org/haskellwiki/GHC/Data_Parallel_Haskell
 
 In truth, nested data parallelism has taken longer than we'd hoped to be 
 ready for abuse :-).   We have not lost enthusiasm though -- Manuel, Roman, 
 Gabi, Ben, and I talk on the phone each week about it.  I think we'll have 
 something usable by the end of the summer.
 
 That's very encouraging! I think people (me included) have gotten the 
 impression that the project ran into problems so challenging that it stalled. 
 Perhaps a small status update once in a while would give people a better idea 
 of what's going on. :)
 

I'm currently working full time on cleaning up Repa and adding more examples. 

I'll do a proper announcement on the mailing lists once I've got the wiki set 
up. It would have been today but community.haskell.org was flaking out 
yesterday.

Ben.


___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Proposal: Australian Hackathon

2010-03-16 Thread Ben Lippmeier

On 16/03/2010, at 10:45 PM, Alex Mason wrote:

 I'd suggest focusing on core Haskell infrastructure, like compilers and 
 tools, rather than individual libraries -- though it all depends on who 
 wants to come along.
 
 Basically, we're just aiming to get a bunch of like minded people together, 
 who want to hack on projects with some other people, possibly with the 
 authors of the projects (for example, I might want help to work on the 
 Accelerate library that Manuel, Gabriele and Sean have been working on, and 
 being able to talk to them directly to find out how the code is all laid out 
 and organised would be much much easier than trying to do the same thing over 
 IRC for example.)

I meant that with these systems there's more of a chance that people have past 
experience with them, so you can hit the ground running, but it's only a 
suggestion.


 You'll also want to consider how a proposed OzHaskell might align and/or 
 combine with other events such as SAPLING[1] and fp-syd[2]. There is also 
 the ICFP programming contest in a few months that many people will be 
 interested in...
 
 The more people we can get in touch with, the better, we'd like to hear from 
 all these groups, if for no better reason than to get the word out that such 
 a thing might be happening... maybe, and to help gauge interest. The more 
 people that know, the more pressure we can bring upon ourselves to get 
 something organised.
 
 I was planning on forwarding this onto the FP-Syd list, but maybe I could ask 
 you to do that Ben? These mailing list things are before my time, and I 
 wouldn't have a clue what to do -_-

You seem to have worked out haskell-cafe, so it can't be that hard!

Ben.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal: Australian Hackathon

2010-03-15 Thread Ben Lippmeier

On 16/03/2010, at 4:28 PM, Ivan Miljenovic wrote:

 * A plotting library using Ben's newly released Gloss library (for
 people who can't or won't install Gtk2Hs to get Chart working; Alex
 Mason is interested in this)
 * Various graph-related project (graphviz, generic graph class, etc.;
 this assumes someone else apart from me cares about this stuff)
 * Hubris if Mark Wotton comes along
 * LLVM if David Terei comes

I'd suggest focusing on core Haskell infrastructure, like compilers and tools, 
rather than individual libraries -- though it all depends on who wants to come 
along.


 So, at least as an initial listing, we'd need to have a listing of:
 1) Who's interested
 2) What dates are good
 3) What projects people want to work on
 4) Where we can host this

You'll also want to consider how a proposed OzHaskell might align and/or 
combine with other events such as SAPLING[1] and fp-syd[2]. There is also the 
ICFP programming contest in a few months that many people will be interested 
in...

Hosting is not a problem. If people want to come to Sydney then I'm sure we can 
organise a room at UNSW. 

Ben.


[1] http://plrg.ics.mq.edu.au/projects/show/sapling
[2] http://groups.google.com/group/fp-syd
[3] http://www.icfpconference.org/contest.html

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Fast Haskell Parser

2010-03-10 Thread Ben Lippmeier

Hi John, 
Doing a Google search for "haskell parser" returns the following link as its 
first result. That's the parser generator that GHC uses.

  http://www.haskell.org/happy/

You could also check out the following:

  http://www.haskell.org/haskellwiki/Parsec
  http://hackage.haskell.org/package/attoparsec

This would also be a perfect question to ask on the haskell-cafe mailing list...

Cheers,
Ben.


On 11/03/2010, at 10:39 AM, John D. Earle wrote:

 I was thinking of ways to create an efficient Haskell parser. My initial 
 thinking was to write a top down parser in Haskell, but if you want speed a 
 table driven approach may make greater sense.
 
 Due to the existence of build bots there is a certain degree of complacency 
 concerning build times. I feel that convenience is an important factor. It 
 should be convenient to build the source. Build bots make an assumption, 
 namely the existence of a formal infrastructure. I believe that it should be 
 possible to build something from source casually.
 
 This is a less demanding goal than high performance incremental builds. It 
 would be nice to out perform make files because if you fail to do this, can 
 it really be said that you are making progress? Correctness appears to be a 
 low priority among computer programmers. That said, it may be worth investing 
 some time in advance to figuring out how to best achieve both objectives, 
 namely correctness and performance. Who knows skills acquired in one project 
 may be useful in another and performance is usually welcome.
 
 So my question is, "What sort of tools and methodologies exist in Haskell to 
 create high performance parsers?" My impression is the speed at which the 
 parser performs its task is not the bottle-neck, but the parser might as well 
 be designed to be efficient so as not to be intellectually lazy. It may even 
 turn out that the parser may need to be efficient merely to compensate for 
 the spawn of correctness, namely slow builds. 
 ___
 Cvs-ghc mailing list
 cvs-...@haskell.org
 http://www.haskell.org/mailman/listinfo/cvs-ghc

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell] ANN: gloss-1.0.0.2: Painless 2D vector graphics, animations and simulations.

2010-03-09 Thread Ben Lippmeier

Gloss hides the pain of drawing simple vector graphics behind a nice data type 
and a few display functions. Gloss uses OpenGL and GLUT under the hood, but you 
won't have to worry about any of that. Get something cool on the screen in 
under 10 minutes.


A simple animated example is:

  import Graphics.Gloss
  main = animateInWindow "My Window" (200, 200) (10, 10) white 
   $ \time -> Rotate (time * 100) $ Color red $ Line [(0, 0), (100, 100)]

animateInWindow first takes the name, size, position and background color of 
the window. The final argument is a function from the time (in seconds) since 
the program started to a picture. Once the window is open you can pan 
around, zoom and rotate the animation using the mouse.


Pictures of more detailed examples are at:
  http://trac.haskell.org/gloss/

Try it out now with:
  cabal update
  cabal install gloss
  cabal install gloss-examples
  gloss-styrene

then right-click drag to rotate the box.

Ben.

___
Haskell mailing list
Haskell@haskell.org
http://www.haskell.org/mailman/listinfo/haskell


Re: [Haskell-cafe] What's the deal with Clean?

2009-11-03 Thread Ben Lippmeier

David Leimbach wrote:
Disciplined Disciple might be interesting to look at here too, but i'm 
not sure I'd deploy anything with DDC just yet :-)
:) Nor would I (and I wrote most of it). I think the approach is right, 
but the compiler itself is still in the research prototype stage.


Ben.



Re: [Haskell-cafe] What's the deal with Clean?

2009-11-03 Thread Ben Lippmeier

David Leimbach wrote:
I have to admit, the first time I hit the wiki page for DDC I said to 
myself "Self, this sounds crazy complicated." Then I read part of the 
PDF (your thesis I believe) about Region Types on the bus ride to work 
and thought, "Gee, I think I scared myself off too quickly."


Uniqueness typing is quite interesting in Clean, but to control 
aliasing, like really *control* aliasing, that's just far out man.


So I still have to wrap my head around why this isn't going to get 
completely out of control and see why it's all safer than just 
writing C code but I must say the attention I will be paying to DDC 
has just gone quite a bit up.


:) A correct C program is just as safe as a correct Haskell/Disciple 
program.


If you're using destructive update then aliasing, side effects and 
mutability all start to matter. It might look complicated when you 
reflect all these things in the type system, but you're really just 
getting a handle on the inherent complications of the underlying program.


I suppose the trick is to be able to ignore said complications when you 
just don't care, or they're not relevant for your particular problem...


Ben.









Re: [Haskell-cafe] Status of GHC as a Cross Compiler

2009-09-24 Thread Ben Lippmeier


No, GHC won't be a native cross compiler in 6.12. There are #ifdefs  
through the code which control what target architecture GHC is being  
compiled for, and at the moment it doesn't support the host  
architecture being different from the target architecture.


I did some work on the native code generator this year which cleans up  
some of this, but it still needs several more weeks put into it to  
make it a real cross compiler.


Cheers,
Ben.


On 24/09/2009, at 5:24 AM, Donnie Jones wrote:


Hello John,

glasgow-haskell-users is a more appropriate list...
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

I went ahead and cc'd your message to the list.  Any replies please
include John's email address as I don't think he is subscribed to the
list.

Hope that helps...
--
Donnie Jones

On Wed, Sep 23, 2009 at 1:50 PM, John Van Enk vane...@gmail.com  
wrote:

Hi,

This may be more appropriate for a different list, but I'm having a  
hard
time figuring out whether or not we're getting a cross compiler in  
6.12 or

not. Can someone point me to the correct place in Trac to find this
information?

/jve






___
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users


Re: [Haskell-cafe] Re: DDC compiler and effects; better than Haskell?

2009-08-13 Thread Ben Lippmeier

Heinrich Apfelmus wrote:

Actually you need five versions: The pure version, the pre-order
traversal, the post-order traversal, the in-order traversal, and the
reverse in-order traversal.  And that is just looking at syntax.  If you
care about your semantics you could potentially have more (or less).



Exactly! There is no unique choice for the order of effects when lifting
a pure function to an effectful one.

For instance, here two different versions of an effectful  map :

   mapM f [] = return []
   mapM f (x:xs) = do
       y  <- f x
       ys <- mapM f xs
       return (y:ys)

   mapM2 f [] = return []
   mapM2 f (x:xs) = do
       ys <- mapM2 f xs
       y  <- f x
       return (y:ys)

Which one will the DDC compiler choose, given

   map f [] = []
   map f (x:xs) = f x : map f xs
  

Disciple uses strict, left-to-right evaluation order by default. For
the above map function, if f has any effects they will be executed
in the same order as the list elements.


? Whenever I write a pure higher order function, I'd also have to
document the order of effects.
  


If you write a straight-up higher-order function like map above,
then it's neither pure nor impure. Rather, it's polymorphic in the
effect of its argument function. When effect information is
added to the type of map it becomes:


map :: forall a b %r1 %r2 !e1 !e2
   .  (a -(!e1)> b) -> List %r1 a -(!e2)> List %r2 b
   :- !e2 = !{ !Read %r1; !e1 }


Which says the effect of evaluating map is to read the list and
do whatever the argument function does. If the argument function
is pure, and the input list is constant, then the application
of map is pure, otherwise not.

If you want to define an always-pure version of map, which
only accepts pure argument functions then you can give it the
signature:

pureMap :: (a -(!e1)> b) -> List %r1 a -> List %r2 b
   :- Pure !e1, Const %r1

.. and use the same definition as before.

Note that you don't have to specify the complete type in the
source language, only the bits you care about - the rest is
inferred.

Now if you try to pass pureMap an impure function, you get
an effect typing error.

Adding purity constraints allows you to write higher-order functions
without committing to an evaluation order, so you can change
it later if desired.
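As a GHC-side aside, the order difference between the two mapM variants quoted above is directly observable; a small runnable sketch that logs the order of effects with an IORef:

```haskell
import Data.IORef

-- Standard left-to-right monadic map, as in the quoted mapM.
mapM1 :: Monad m => (a -> m b) -> [a] -> m [b]
mapM1 _ []     = return []
mapM1 f (x:xs) = do
  y  <- f x
  ys <- mapM1 f xs
  return (y:ys)

-- Same result, but effects run right-to-left, as in the quoted mapM2.
mapM2 :: Monad m => (a -> m b) -> [a] -> m [b]
mapM2 _ []     = return []
mapM2 f (x:xs) = do
  ys <- mapM2 f xs
  y  <- f x
  return (y:ys)

main :: IO ()
main = do
  logRef <- newIORef ([] :: [Int])
  let f x = modifyIORef logRef (++ [x]) >> return (x * 2)
  rs1  <- mapM1 f [1, 2, 3]
  log1 <- readIORef logRef
  writeIORef logRef []
  rs2  <- mapM2 f [1, 2, 3]
  log2 <- readIORef logRef
  print (rs1, log1)  -- ([2,4,6],[1,2,3])
  print (rs2, log2)  -- ([2,4,6],[3,2,1])
```

Both variants return the same list; only the order in which the effects fire differs, which is exactly the choice the effect annotation pins down.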


Ben.




Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-12 Thread Ben Lippmeier

Derek Elkins wrote:

The compiler is supposed to be able to reorder non-strict
evaluation to do optimisations, but that can't be done if effects
could happen.



There's nothing special about non-strict evaluation that makes the
antecedent true.  Replacing non-strict with strict gives just as
much of a valid statement.  It is purity that allows (some) reordering
of evaluation.
  

Here are two effectful statements that can safely be reordered.

 print foo
 x := 5


here are two more

 y := 2
 z := 3

(provided y and z don't alias)


Purity allows some reordering of evaluation, so does knowing that
two effectful computations won't interfere.
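The y/z example can be checked mechanically in plain Haskell; a small sketch with IORefs standing in for the mutable variables:

```haskell
import Data.IORef

main :: IO ()
main = do
  y <- newIORef (0 :: Int)
  z <- newIORef (0 :: Int)
  -- order 1:  y := 2; z := 3
  writeIORef y 2
  writeIORef z 3
  a <- (,) <$> readIORef y <*> readIORef z
  -- reset, then order 2:  z := 3; y := 2
  writeIORef y 0
  writeIORef z 0
  writeIORef z 3
  writeIORef y 2
  b <- (,) <$> readIORef y <*> readIORef z
  -- same final state either way, because y and z don't alias
  print (a == b)  -- True
```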


Ben.





Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-12 Thread Ben Lippmeier

Dan Doel wrote:
For instance: what effects does disciple support? Mutation and IO? 

You can create your own top-level effects which interfere
with all others, for example:

effect !Network;
effect !File;

readFile :: String -(!e)> String
 :- !e = !File

Now any function that calls readFile will also have a !File effect.
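GHC has no direct analogue of these declared effect rows, but a rough flavour can be faked with phantom type-level lists; a sketch under that assumption (all names here are hypothetical, not part of any library):

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}

-- Hypothetical top-level effects, mimicking `effect !File; effect !Network`.
data Effect = File | Network

-- An IO action tagged with the effects it may perform.
newtype Eff (es :: [Effect]) a = Eff { runEff :: IO a }

-- Anything built from readFileEff carries the File tag in its type.
readFileEff :: FilePath -> Eff '[ 'File ] String
readFileEff path = Eff (readFile path)

-- A pure computation keeps an empty effect row.
pureEff :: a -> Eff '[] a
pureEff = Eff . return

main :: IO ()
main = do
  x <- runEff (pureEff (21 * 2 :: Int))
  print x  -- 42
```

This only tags computations; Disciple's system additionally infers and unions the rows automatically, which this sketch does not attempt.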

What if I 
want non-determinism, or continuations, etc.? How do I as a user add those 
effects to the effect system, and specify how they should interact with the 
other effects? As far as I know, there aren't yet any provisions for this, so 
presumably you'll end up with effect system for effects supported by the 
compiler, and monads for effects you're writing yourself.
  

Yep.

In Disciple, a computation has an effect if its evaluation cannot
safely be reordered with others having the same effect. That is,
computations have effects if they might interfere with others.

One of the goals of the work has been to perform compiler
optimisations without having to use IO-monad style state threading.
IO is very coarse grained, and using the IO monad for everything
tends to introduce more data-dependencies than strictly needed, which
limits what optimisations you can do.

Non-determinism and continuations are trickier things than the simple
notion of effects-as-interference, which I haven't got a good
solution for.

Ben.




Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-12 Thread Ben Lippmeier

Dan Doel wrote:
Off hand, I'd say I don't write foo and fooM versions of functions much in 
actual programs, either. Such duplication goes into libraries...

It would be ok if the duplication /was/ actually in the libraries,
but often it's not.

Note the lack of Data.Map.mapM and Data.Map.foldM. Want to apply a monadic
computation to all the elements of a Data.Map? Convert it to a list and 
back..


Ben.



Re: DDC compiler and effects; better than Haskell? (was Re: [Haskell-cafe] unsafeDestructiveAssign?)

2009-08-12 Thread Ben Lippmeier

Dan Doel wrote:

On Wednesday 12 August 2009 11:46:29 pm Ben Lippmeier wrote:
  

Dan Doel wrote:


Off hand, I'd say I don't write foo and fooM versions of functions much
in actual programs, either. Such duplication goes into libraries...
  

It would be ok if the duplication /was/ actually in the libraries,
but often it's not.

Note the lack of Data.Map.mapM and Data.Map.foldM. Want to apply a monadic
computation to all the elements of a Data.Map? Convert it to a list and
back..



Or use Data.Traversable.mapM and Data.Foldable.foldM.

  

Ah thanks, I didn't notice the Traversable instance. There are
other higher-order functions in Data.Map that don't seem to have
monadic counterparts though, like insertWith, unionsWith, updateAt ...
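For the mapM/foldM side that is covered, a runnable sketch going through the container's Traversable and Foldable instances:

```haskell
import qualified Data.Map as Map
import Data.Foldable (foldlM)

main :: IO ()
main = do
  let m = Map.fromList [(1, "a"), (2, "b")] :: Map.Map Int String
  -- monadic map over the elements (traverse visits them in key order)
  m' <- traverse (\v -> putStrLn v >> return (v ++ "!")) m
  print (Map.toList m')  -- [(1,"a!"),(2,"b!")]
  -- monadic fold over the elements
  n  <- foldlM (\acc v -> return (acc + length v)) (0 :: Int) m
  print n                -- 2
```

Functions like insertWith and unionsWith take pure combining functions in their types, so they genuinely have no such generic escape hatch.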

Ben.





Re: [Haskell-cafe] Proposal: TypeDirectedNameResolution

2009-07-29 Thread Ben Lippmeier


On 28/07/2009, at 6:41 AM, John Dorsey wrote:

I'm assuming that name resolution is currently independent of type
inference, and will happen before type inference.  With the proposal  
this is
no longer true, and in general some partial type inference will have  
to

happen before conflicting unqualified names are resolved.

My worry is that the proposal will require a compliant compiler to
interweave name resolution and type inference iteratively.

To my untrained eye it looks complicated and invasive, even without  
the
mutually recursive case.  Can anyone shed light on whether this  
would be a

problem for, say, GHC?



My experimental compiler DDC [1] implements TDNR almost exactly as  
given on the Haskell' wiki.


Yes, you have to interweave name resolution with type inference,  
because there is no way to compute the binding dependency graph/call  
graph before type inference proper. This is discussed in section 3.5  
of my thesis [2] (which is currently under examination). For DDC I  
used a constraint based inference algorithm to compute the binding  
dependency graph on the fly, but I don't know how easy it would be  
to retrofit this method into GHC.


Cheers,
Ben.


[1] http://www.haskell.org/haskellwiki/DDC
[2] 
http://cs.anu.edu.au/people/Ben.Lippmeier/project/thesis/thesis-lippmeier-sub.pdf






Re: [Haskell-cafe] ThreadScope: Request for features for the performance tuning of parallel and concurrent Haskell programs

2009-03-11 Thread Ben Lippmeier


Hi Satnam,

On 12/03/2009, at 12:24 AM, Satnam Singh wrote:
Before making the release I thought it would be an idea to ask  
people what other features they would find useful for performance  
tuning. So if you have any suggestions please do let us know!




Is it available in a branch somewhere to try out?

Ben.






Re: [Haskell-cafe] I want to write a compiler

2009-03-08 Thread Ben Lippmeier


On 08/03/2009, at 12:45 PM, Austin Seipp wrote:


For garbage collection, please see.

Accurate Garbage Collection in an Uncooperative Environment -
http://citeseer.ist.psu.edu/538613.html

This strategy is currently used in Mercury as well as Ben L.'s DDC
language; on that note, I think if you spent some time looking through
the runtime/generated code of DDC, you can see exactly what the paper
is talking about, because it's actually a very simple strategy for
holding onto GC roots:

http://code.haskell.org/ddc/ddc-head/runtime/


That paper explains the basic idea, but neither DDC nor Mercury quite  
follow it (I asked Zoltan). The system in the paper keeps the GC roots  
in structs on the C stack, and chains the structs together as a linked  
list. The problem is that if you take a pointer to data on the C stack  
then GCC freaks out and disables a host of optimisations. I imagine  
it's worried about pointers going bad after the stack frame is popped  
and the space for the struct gets lost.


DDC keeps a shadow stack of GC roots in malloced memory. It's only a  
small difference, but lets the C compiler produce better code.


Ben.





Re: Type (class) recursion + families = exponential compile time?

2009-02-26 Thread Ben Lippmeier


Here's the reference
http://portal.acm.org/citation.cfm?id=96748

Deciding ML typability is complete for deterministic exponential  
time -- Harry G. Mairson.


Ben.


On 27/02/2009, at 10:12 AM, Ben Franksen wrote:


Hi

the attached module is a much reduced version of some type-level  
assurance
stuff (inspired by the Lightweight Monadic Regions paper) I am  
trying to
do. I am almost certain that it could be reduced further but it is  
late and

I want to get this off my desk.

Note the 4 test functions, test11 .. test14. The following are  
timings for
compiling the module only with all test functions commented out,  
except

respectively, test11, test12, test13, and test14:

b...@sarun[1]  time ghc -c Bug2.hs
ghc -c Bug2.hs  1,79s user 0,04s system 99% cpu 1,836 total

b...@sarun[1]  time ghc -c Bug2.hs
ghc -c Bug2.hs  5,87s user 0,14s system 99% cpu 6,028 total

b...@sarun[1]  time ghc -c Bug2.hs
ghc -c Bug2.hs  23,52s user 0,36s system 99% cpu 23,899 total

b...@sarun[1]  time ghc -c Bug2.hs
ghc -c Bug2.hs  102,20s user 1,32s system 97% cpu 1:45,89 total

It seems something is scaling very badly. You really don't want to  
wait for

a version with 20 levels of nesting to compile...

If anyone has a good explanation for this, I'd be grateful.

BTW, I am not at all certain that this is ghc's fault, it may well  
be my
program, i.e. the constraints are too complex, whatever. I have no  
idea how
hard it is for the compiler to do all the unification. Also, the  
problem is
not of much practical relevance, as no sensible program will use  
more than

a handfull levels of nesting.

Cheers
Ben
Bug2.hs




Re: [Haskell-cafe] Performance question

2009-02-26 Thread Ben Lippmeier


Yep, this program will crawl.

You can get reasonable numeric performance out of GHC, but you need to  
work at it. There is some advice in the GHC manual at http://www.haskell.org/ghc/docs/latest/html/users_guide/faster.html 
.


The first thing I would do is replace your
isInCircle :: (Floating a, Ord a) => (a,a) -> Bool
with
isInCircle :: (Double, Double) -> Bool
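A sketch of that advice under stated assumptions: monomorphic Double arguments and a strict accumulator loop, with a deterministic midpoint grid standing in for the random sampling so the sketch needs no RNG dependency (function names are illustrative, not from the original program):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Monomorphic arguments: no Floating/Ord dictionaries passed at runtime.
isInCircle :: Double -> Double -> Bool
isInCircle x y = x*x + y*y <= 1

-- Strict counting loop over an n*n midpoint grid of the unit square.
approxPi :: Int -> Double
approxPi n = 4 * fromIntegral (go 0 0 0) / fromIntegral (n * n)
  where
    go :: Int -> Int -> Int -> Int
    go i j !acc
      | i == n    = acc
      | j == n    = go (i + 1) 0 acc
      | otherwise =
          let x = (fromIntegral i + 0.5) / fromIntegral n
              y = (fromIntegral j + 0.5) / fromIntegral n
          in  go i (j + 1) (if isInCircle x y then acc + 1 else acc)

main :: IO ()
main = print (approxPi 500)  -- close to pi
```

With the monomorphic types and the strict accumulator, GHC can keep the loop state in registers instead of allocating boxed tuples each iteration.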

Ben.



On 26/02/2009, at 8:53 PM, hask...@kudling.de wrote:


Hi,

i have compared a C++ implementation with a Haskell implementation  
of the Monte Carlo Pi approximation:


http://lennart.kudling.de/haskellPi/

The Haskell version is 100 times slower and i wonder whether i do  
something obvious wrong.


Profiling says that the majority of the time is spend in main. But  
i have no idea where.


Can someone give me a hint?

Thanks,
Lenny






Re: [Haskell-cafe] Performance question

2009-02-26 Thread Ben Lippmeier


On 26/02/2009, at 9:27 PM, hask...@kudling.de wrote:


Currently i can only imagine to define a data type in order to use  
unboxed Ints instead of the accumulator tuple.


That would probably help a lot. It would also help to use two separate  
Double# parameters instead of the tuple.


The thing is that i don't see in the profile output yet what to  
improve.
There are some allocations going on in main, but i don't know what  
causes it.



The first thing I would do is replace your
isInCircle :: (Floating a, Ord a) => (a,a) -> Bool
with
isInCircle :: (Double, Double) -> Bool


Can you point me to why that matters?


At the machine level, GHC treats the (Floating a, Ord a) as an extra  
argument to the function. This argument holds function pointers that  
tell it how to perform multiplication and (<=) for the unknown type 'a'.  
If you use Double instead of 'a', then it's more likely to use the  
actual machine op.
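If changing every signature is awkward, a SPECIALIZE pragma shows the same dictionary-elimination idea while keeping the polymorphic interface (a sketch; the names are illustrative, not from the original program):

```haskell
-- The polymorphic version receives Floating/Ord dictionaries at runtime;
-- SPECIALIZE asks GHC to also emit a dictionary-free Double copy and use
-- it wherever the types are known.
isInCircleP :: (Floating a, Ord a) => (a, a) -> Bool
isInCircleP (x, y) = x*x + y*y <= 1
{-# SPECIALIZE isInCircleP :: (Double, Double) -> Bool #-}

-- The monomorphic version needs no dictionary to begin with.
isInCircleM :: (Double, Double) -> Bool
isInCircleM (x, y) = x*x + y*y <= 1

main :: IO ()
main = print (isInCircleP (0.6 :: Double, 0.7), isInCircleM (1.2, 0.1))
-- (True,False)
```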


Ben.




Re: Re[2]: [Haskell-cafe] Re[4]: [Haskell] Google Summer of Code

2009-02-11 Thread Ben Lippmeier


A: X has some problems with runtime performance.
B: My work solves all your problems. There is no problem.

Beware of the Turing tar-pit in which everything is possible but  
nothing of interest is easy - Alan Perlis.


can /= can be bothered.

:)

Ben.


On 12/02/2009, at 5:26 PM, Daniel Peebles wrote:


These seem to be good starting points:

http://donsbot.wordpress.com/2008/05/06/write-haskell-as-fast-as-c-exploiting-strictness-laziness-and-recursion/
http://donsbot.wordpress.com/2008/06/04/haskell-as-fast-as-c-working-at-a-high-altitude-for-low-level-performance/
http://haskell.org/haskellwiki/Wc


On Wed, Feb 11, 2009 at 8:15 PM, Bulat Ziganshin
bulat.zigans...@gmail.com wrote:

Hello Don,

Thursday, February 12, 2009, 3:45:36 AM, you wrote:

You should do your own benchmarking!


well, when you say that ghc can generate code that is as fast as gcc, i
expect that you can supply some arguments. is your only argument
that ghc was improved in last years? :)





Re: [Haskell-cafe] Animated line art

2008-12-05 Thread Ben Lippmeier


On 06/12/2008, at 6:34 AM, Andrew Coppin wrote:


Ben Lippmeier wrote:

The ANUPlot graphics library I wrote does exactly this.
The darcs repo is at http://code.haskell.org/ANUPlot/ANUPlot-HEAD/
It comes with lots of examples that do the sort of things you  
describe.


Does it handle drawing lines and circles (with antialiasing)? Can I  
save the output as PNG?


Lines and circles yes, antialiasing no. It uses OpenGL for rendering,  
so maybe there's a flag to turn it on. PNG isn't usually required for  
animations. When I need to make an image I just do a screenshot.


Ben.



Re: [Haskell-cafe] Animated line art

2008-12-04 Thread Ben Lippmeier




On 05/12/2008, at 10:46 AM, Tim Docker wrote:
Someone else already mentioned FRAN and it's ilk. But perhaps you  
don't

need something that fancy. If you implement your drawing logic as a
function from time to the appropriate render actions, ie

| import qualified Graphics.Rendering.Cairo as C
|
| type Animation = Time -> C.Render ()

then you just need to call this function multiple times to generate
successive frames.
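That time-to-frame pattern can be sketched with plain text frames standing in for Cairo output (the names here are hypothetical, chosen to avoid any rendering dependency):

```haskell
type Time  = Double
type Frame = String

-- A "picture" as a function of time: a bar whose length grows with t.
animation :: Time -> Frame
animation t = replicate (1 + floor (10 * t) `mod` 10) '*'

-- Successive frames are just the same function sampled at successive times.
main :: IO ()
main = mapM_ (putStrLn . animation) [0.0, 0.1 .. 0.4]
```

Swapping Frame for C.Render () and putStrLn for a Cairo surface write gives the structure described above.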


The ANUPlot graphics library I wrote does exactly this.
The darcs repo is at http://code.haskell.org/ANUPlot/ANUPlot-HEAD/
It comes with lots of examples that do the sort of things you describe.

Ben.



Re: SHA1.hs woes, was Version control systems

2008-08-19 Thread Ben Lippmeier


On 19/08/2008, at 8:57 PM, Ian Lynagh wrote:


On Mon, Aug 18, 2008 at 09:20:54PM +1000, Ben Lippmeier wrote:


Ian: Did this problem result in Intel CC / GCC register allocator
freakouts?


Have you got me confused with someone else? I don't think I've ever  
used

Intel CC.



Sorry, I couldn't find the rest of the preceding message. Someone  
wrote that they had to turn down cc flags to get SHA1.hs to compile on  
IA64.


What C compiler was being used, and what were the symptoms?

SHA1.hs creates vastly more register pressure than any other code I  
know of (or could find), but only when -O or -O2 is enabled in GHC. If  
-O and -prof are enabled then the linear allocator runs out of stack  
slots (last time I checked).


I'm wondering three things:

1) If the C compiler could not compile the C code emitted by GHC then  
maybe we should file a bug report with the CC people.


2) If the register pressure in SHA1.hs is more due to excessive code  
unfolding than the actual SHA algorithm, then maybe this should be  
treated as a bug in the simplifier(?) (sorry, I'm not familiar with  
the core level stuff)


3) Ticket #1993 says that the linear allocator runs out of stack  
slots, and the graph coloring allocator stack overflows when trying to  
compile SHA1.hs with -funfolding-use-threshold20. I'm a bit worried  
about the stack over-flow part.


The graph size is O(n^2) in the number of vreg conflicts, which isn't  
a problem for most code. However, if register pressure in SHA1.hs is  
proportional to the unfolding threshold (especially if more than  
linearly) then you could always blow up the graph allocator by setting  
the threshold arbitrarily high.


In this case maybe the allocator should give a warning when the  
pressure is high and suggest turning the threshold down. Then we could  
close this issue and prevent it from being re-opened.


Cheers,
Ben.



Re: Version control systems

2008-08-18 Thread Ben Lippmeier


On 18/08/2008, at 8:13 PM, Simon Marlow wrote:
So would I usually, though I've had to turn down cc flags to get  
darcs
to build on ia64 before (SHA1.hs generates enormous register  
pressure).


We should really use a C implementation of SHA1, the Haskell version  
isn't buying us anything beyond being a stress test of the register  
allocator.




.. and perhaps a test case for too much code unfolding in GHC? Sounds  
like bugs to me. :)


If you turn down GHC flags the pressure also goes away.

Ian: Did this problem result in Intel CC / GCC register allocator  
freakouts?


Ben.



