[Haskell-cafe] A friendly reminder: Ghent-FPG meeting on 26 June, 2013

2013-06-24 Thread Andy Georges

Hello,


This is to remind you that you are kindly invited to attend our next meeting. 
The original email follows below.


The Functional Programming Group Ghent (GhentFPG) [1] is a friendly group for
all people interested in functional programming, with a tendency towards 
Haskell.
It is organised as part of Zeus WPI [2].

We are pleased to announce that we will hold our next meeting on Wednesday, 26
June, starting at 19:00! There will be three talks.


The main presentation, by Adam Bergmark from Silk [3], is about Fay [4]:

 Fay is a proper subset of Haskell that compiles to JavaScript. There is a
 compiler with the same name written in Haskell. Web browsers only speak
 JavaScript, but more and more people find that they would rather compile
 to JavaScript than write it by hand.

 Why do we want to compile Haskell to JavaScript, and what advantages does
 Fay have compared to other compilers?

 What are the challenges in compiling Haskell and supporting a language
 ecosystem, and how do we do it?

 What can Fay currently do, and what is planned for the future?

 This will be a broad overview about Fay for prospective users, followed by
 an in-depth look at interesting parts of the compiler internals.
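
For readers who have not seen Fay code before, here is a minimal sketch of
what a Fay module looks like (hedged: this follows the Fay documentation as
I recall it; the FFI module name and the %1 placeholder syntax should be
checked against the current release):

module Hello (main) where

import FFI

-- Bind a JavaScript function through Fay's FFI; %1 is the first argument.
alert :: String -> Fay ()
alert = ffi "window.alert(%1)"

main :: Fay ()
main = alert "Hello from Fay!"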


Additionally, there will be two short talks by students who wrote their MSc
theses on functional programming:

 Genetic Algorithms in Haskell by Matthias Delbar
 Automatic Detection of Recursion Patterns by Jasper Van der Jeugt


The meeting will take place in the Jozef Plateauzaal at the following address:

Faculteit Ingenieurswetenschappen 
Universiteit Gent
Plateaustraat 22
9000 Gent

As mentioned above, we aim to start at 19:00. After the meeting we can go
for drinks in a nearby pub (this latter part is, of course, completely optional).

We hope to see you all there!

Regards,
On behalf of the GhentFPG organising committee.


[1]: http://groups.google.com/group/ghent-fpg
[2]: http://zeus.ugent.be/
[3]: http://www.silkapp.com/
[4]: http://www.fay-lang.org/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: proposal: HaBench, a Haskell Benchmark Suite

2010-06-25 Thread Andy Georges
Hi Simon et al,

On Jun 25, 2010, at 14:39, Simon Marlow wrote:

 On 25/06/2010 00:24, Andy Georges wrote:
 
 [snip]
 Are there any inputs available that allow the 'real' part of the suite
 to run for a sufficiently long time? We're going to use criterion in
 any case, given our own expertise with rigorous benchmarking [3,4],
 but since we've made a case in the past against short-running apps on
 managed runtime systems [5], we'd love to have stuff that runs at
 least on the order of seconds, while doing useful things. All
 pointers are much appreciated.
 
 The short answer is no, although some of the benchmarks have tunable input 
 sizes (mainly the spectral ones) and you can 'make mode=slow' to run those 
 with larger inputs.
 
 More generally, the nofib suite really needs an overhaul or replacement.  
 Unfortunately it's a tiresome job and nobody really wants to do it. There 
 have been various abortive efforts, including nobench and HaBench.  Meanwhile 
 we in the GHC camp continue to use nofib, mainly because we have some tool 
 infrastructure set up to digest the results (nofib-analyse).  Unfortunately 
 nofib has steadily degraded in usefulness over time due to both faster 
 processors and improvements in GHC, such that most of the programs now run 
 for less than 0.1s and are ignored by the tools when calculating averages 
 over the suite.

Right. I have the distinct feeling this is a major gap in the Haskell world.
SPEC evolved over time to include larger benchmarks that still exercise the
various parts of the hardware, so that a benchmark does not suddenly achieve
a large improvement on a new architecture/implementation merely because,
e.g., a larger cache lets the working set stay cached for the entire
execution. The Haskell community has nothing that remotely resembles a
decent suite. You could run experiments and show that over 10K iterations
the average execution time per iteration drops from 500ms to 450ms, but
what does that really mean?

 We have a need not just for plain Haskell benchmarks, but benchmarks that test
 
 - GHC extensions, so we can catch regressions
 - parallelism (see nofib/parallel)
 - concurrency (see nofib/smp)
 - the garbage collector (see nofib/gc)
 
 I tend to like quantity over quality: it's very common to get just one 
 benchmark in the whole suite that shows a regression or exercises a 
 particular corner of the compiler or runtime.  We should only keep benchmarks 
 that have a tunable input size, however.

I would suggest that the first category might be made up of microbenchmarks,
as I do not think it is really needed for performance measurement per se.
The other categories, however, really need long-running benchmarks that
(preferably) use heaps of RAM, even when they are well tuned.

 Criterion works best on programs that run for short periods of time, because 
 it runs the benchmark at least 100 times, whereas for exercising the GC we 
 really need programs that run for several seconds.  I'm not sure how best to 
 resolve this conflict.

I'm not sure about this. Given that there is quite some non-determinism
in modern CPUs and that computer systems seem to behave chaotically [1], I
definitely see the need to employ Criterion for longer-running applications
as well. It might not need 100 executions, or multiple iterations per
execution (incidentally, can those iterations be said to be independent?),
but somewhere around 20-30 seems to be a minimum.
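
To make this concrete, here is roughly what a criterion benchmark looks
like -- a sketch using only the stable core API (defaultMain, bench, nf);
the knobs controlling sample counts have moved around between criterion
versions, so that side is left to the command line:

import Criterion.Main

-- A deliberately simple kernel; any pure function works here.
fib :: Int -> Integer
fib n = if n < 2 then fromIntegral n else fib (n - 1) + fib (n - 2)

main :: IO ()
main = defaultMain
  [ bench "fib 20" (nf fib 20)  -- short-running: criterion's sweet spot
  , bench "fib 35" (nf fib 35)  -- noticeably longer per sample
  ]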

 
 Meanwhile, I've been collecting pointers to interesting programs that cross 
 my radar, in anticipation of waking up with an unexpectedly free week in 
 which to pull together a benchmark suite... clearly overoptimistic!  But I'll 
 happily pass these pointers on to anyone with the inclination to do it.


I'm definitely interested. If I want to make a strong case for my current
research, I really need benchmarks that can be used. Additionally, coming up
with a good suite and characterising it can easily result in a decent paper
that is certain to be cited numerous times. I think it would have to be a
group/community effort though. I've looked through the apps on the Haskell
wiki pages, but there's not much usable there, imho. I'd like to illustrate
this with the DaCapo benchmark suite [2,3]. It took a while, but now
everybody in the Java camp is (or should be) using these benchmarks. Saying
that we simply do not want to do this is not a tenable position.


-- Andy


[1] Computer systems are dynamical systems, Todd Mytkowicz, Amer Diwan, and
Elizabeth Bradley, Chaos 19, 033124 (2009); doi:10.1063/1.3187791 (14 pages).
[2] The DaCapo benchmarks: Java benchmarking development and analysis,
Stephen Blackburn et al., OOPSLA 2006.
[3] Wake up and smell the coffee: evaluation methodology for the 21st
century, Stephen Blackburn et al., CACM 2008.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

RE: [Haskell-cafe] proposal: HaBench, a Haskell Benchmark Suite

2010-06-24 Thread Andy Georges
Hi Simon et al,


I've picked up the HaBench/nofib/nobench issue again, needing a decent set of
real applications to do some exploration of what people these days call
split compilation. We have a framework that was able to explore GCC
optimisations [1] -- the downside there was that the optimisations depend on
each other, requiring them to be applied in a certain order -- for a
multi-objective search space, and we extended this to exploring a JIT
compiler [2], for Java in our case, which posed its own problems. Going one
step further, we'd like to explore the trade-offs that can be made when
compiling at different levels: source to bytecode (in some sense) and
bytecode to native. Given that LLVM is quickly becoming a state-of-the-art
framework, and given the recent GHC support for it, we figured that Haskell
would be an excellent vehicle for our exploration and research (the fact
that some people at our lab have a soft spot for Haskell helps too). Which
brings me back to benchmarks.

Are there any inputs available that allow the 'real' part of the suite to run
for a sufficiently long time? We're going to use criterion in any case, given
our own expertise with rigorous benchmarking [3,4], but since we've made a
case in the past against short-running apps on managed runtime systems [5],
we'd love to have programs that run at least on the order of seconds while
doing useful things. All pointers are much appreciated.

Or if any of you out there have (recent) apps with inputs that are open source 
... let us know.

-- Andy


[1] COLE: Compiler Optimization Level Exploration, Kenneth Hoste and Lieven 
Eeckhout, CGO 2008
[2] Automated Just-In-Time Compiler Tuning, Kenneth Hoste, Andy Georges and 
Lieven Eeckhout, CGO 2010
[3] Statistically Rigorous Java Performance Evaluation, Andy Georges, Dries 
Buytaert and Lieven Eeckhout, OOPSLA 2007
[4] Java Performance Evaluation through Rigorous Replay Compilation, Andy 
Georges, Lieven Eeckhout and Dries Buytaert, OOPSLA 2008
[5] How Java Programs Interact with Virtual Machines at the Microarchitectural 
Level, Lieven Eeckhout, Andy Georges, Koen De Bosschere, OOPSLA 2003


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] help optimizing memory usage for a program

2009-03-02 Thread Andy Georges



Hi Kenneth,


I've thrown my current code online at
http://boegel.kejo.be/files/Netflix_read-and-parse_24-02-2009.hs,

let me know if it's helpful in any way...


Maybe you could set up a darcs repo for this, so that we can submit
patches against your code?



-- Andy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Moving nobench towards HaBench

2009-01-22 Thread Andy Georges

Hello,


A while back, we had a discussion on #haskell about assembling a
Haskell benchmark suite suitable for doing performance tests.
A preliminary page was set up at http://www.haskell.org/haskellwiki/HaBench.
In the meantime, Don Stewart extended the original nofib suite
with some shootout benchmarks afaik, resulting in nobench. The code
base for the latter currently resides at http://code.haskell.org/nobench/.


I have been trying to get it running on GHC 6.10.1. For now, I added a
number of type definitions to the code, getting the build/runtime
system to compile. The same probably ought to be done for the
benchmarks themselves, unless there is a cheat around this using some
language extension. Anyhow, I'll post a patch against the current
repository as soon as I have a number of benchmarks running.


The main issue that still remains is the availability of real-life
benchmarks. I agree that micro-benchmarks can be useful
for testing purposes or for measuring the efficacy and effectiveness of
certain optimisations, yet I firmly believe any community needs
a set of benchmarks that actually reflects the real-life usage of the
language. I am thinking along the lines of the DaCapo
project, which assembled a number of very good benchmarks for the
Java language and its VM. So the question basically boils down to
this: is there anybody interested in making the move towards HaBench,
and if so, do you know of real-life benchmarks that can serve this
exact purpose?


The benchmarks should preferably execute for more than 10s on modern
machines, using a decent amount of RAM (say somewhere between 50 and 500MB),
thus exercising all parts of a modern computing system. The code
should not be trivial, and the set of benchmarks should eventually
cover the most common uses of Haskell in industry. Of course, the
benchmarks themselves should be open source. If possible, they should
come with multiple inputs, allowing a short (test) run as well as
longer measurement runs.
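
As a sketch of the shape such a benchmark could take (all names here are
illustrative, not part of any existing suite): one executable whose input
size is selected on the command line, so the same program supports a short
test run and longer measurement runs.

import System.Environment (getArgs)

-- An illustrative stand-in for a real workload.
workload :: Int -> Integer
workload n = sum [ toInteger (gcd i (2 * i + 1)) | i <- [1 .. n] ]

main :: IO ()
main = do
  args <- getArgs
  let n = case args of
            ["test"] -> 100000      -- quick smoke-test input
            ["ref"]  -> 50000000    -- measurement input, runs for seconds
            [s]      -> read s      -- explicit size
            _        -> 1000000     -- default
  print (workload n)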



If you are able and willing to help out, drop by the HaBench page
and drop us a line.



-- Andy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell Cookbook?

2007-03-07 Thread Andy Georges

Hi,

On 1 Feb 2007, at 00:50, Alexy Khrabrov wrote:


Also see that sequence.complete.org has many code snippets in the blog
section.  What would be a good way to systematize all such snippets
together with hpaste.org and those scrolling through the mailing list?
Perhaps some kind of ontology of snippets like the table of contents
of a cookbook?


How about using a tag cloud, akin to del.icio.us?

-- Andy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] proposal: HaBench, a Haskell Benchmark Suite

2007-01-28 Thread Andy Georges

Hi,

Following up on the threads on haskell and haskell-cafe, I'd like
to gather ideas, comments and suggestions for a standardised Haskell
Benchmark Suite.


The idea is to gather a bunch of programs written in Haskell which
are representative of the Haskell community (i.e. apps,
libraries, ...). Following the example of SPEC (apart from the fact
that the SPEC benchmarks aren't available for free), we would like
to build a database containing performance measurements for the
various benchmarks in the suite. Users should be able to submit
their own results. This will hopefully stimulate people to take
performance into account when writing a Haskell program/library,
and will also serve as a valuable tool for further optimising both
applications written in Haskell and the various Haskell compilers
out there (GHC, jhc, nhc, ...).


This thread is meant to gather people's thoughts on this subject.
Which programs should we consider for the first version of the
Haskell benchmark suite?
How should we standardise them, and make them produce reliable
performance measurements?
Should we only use hardware performance counters, or also do more
thorough analysis such as data locality studies?
Are there any papers available on this subject? (I know about the
ICFP paper that is being written as we speak, which uses PAPI as a
tool.)


I think that we should have, as David Roundy pointed out, a
restriction to code that is actually used frequently. However, I
think we should make a distinction between micro-benchmarks, which
test some specific item, and real-life benchmarks. When using
micro-benchmarks, the wrong conclusions may be drawn, because e.g.
code or data can be completely cached, there are no TLB misses after
startup, etc. If somebody is interested in knowing how Haskell
performs, and whether he should use it for his development, it is
nice to know that e.g. Data.ByteString performs as well as C, but it
would be even nicer to see that large, real-life apps can reach that
same performance. There is more to the Haskell runtime than simply
executing application code, and these things should also be taken
into account.


Also, I think that having several compilers for the benchmark set is
a good idea because, afaik, they can each provide a different runtime
system as well. We know that in Java the VM can have a significant
impact on behaviour at the microprocessor level. I think that Haskell
may have similar issues.


Also, similar to SPEC CPU, it would be nice to have input sets for
each benchmark that gets included in the suite. Furthermore, I think
that we should provide a rigorous analysis of the benchmarks on as
many platforms as is feasible. See e.g. the analysis done for the
DaCapo Java benchmark suite, published at OOPSLA 2006.


-- Andy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] proposal: HaBench, a Haskell Benchmark Suite

2007-01-28 Thread Andy Georges


On 28 Jan 2007, at 12:57, Joel Reymont wrote:



On Jan 28, 2007, at 8:51 AM, Andy Georges wrote:

it is nice to know that e.g. Data.ByteString performs as well as
C, but it would be even nicer to see that large, real-life apps
can reach that same performance.


What about using darcs as a benchmark? I heard people say it's  
slow. The undercurrent is that it's slow because it's written in  
Haskell.


I have pondered that. What would the input set be? And how would we
repeatedly run the benchmark? Should we just have a recording phase?
Or a diff phase? It seems difficult to use a VC system as a benchmark.
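
One hedged way to make such a benchmark repeatable (a sketch only; the
paths, flags and input tree below are illustrative): script a fixed
sequence of darcs operations against a scratch repository, so every run
performs identical work.

import System.Process (system)
import System.Exit (ExitCode)

-- Rebuild a scratch repo and record a fixed snapshot in it, so each
-- benchmark run exercises darcs on exactly the same inputs.
runDarcsBench :: IO ExitCode
runDarcsBench = do
  _ <- system "rm -rf bench-repo && mkdir bench-repo"
  _ <- system "cd bench-repo && darcs init"
  _ <- system "cd bench-repo && cp -r ../inputs/* . && darcs add -r ."
  system "cd bench-repo && darcs record -a -m snapshot --author=bench@example.org"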


-- Andy

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Announce: Package rdtsc for reading IA-32 time stamp counters

2007-01-04 Thread Andy Georges

Hi,


version 1.0 of package rdtsc has just been released.

This small package contains one module called 'Rdtsc.Rdtsc'.


I am wondering what it would take to get rdpmc in there as well. Of
course, you'd need some way to set up the PMCs before running, but that
can be done using e.g. perfctr. I'd like to take a swing at
implementing this, unless somebody else volunteers or thinks it's
basically useless.
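
As a sketch of what a tiny harness on top of the announced package might
look like (hedged: I'm assuming the module exposes something like
rdtsc :: IO Word64 -- check the package for the exact name -- and raw
tick differences are only meaningful without context switches or core
migrations in between):

import Data.Word (Word64)
import Rdtsc.Rdtsc (rdtsc)  -- assumed interface: rdtsc :: IO Word64

-- Run an action and return its result together with the elapsed
-- time stamp counter ticks.
measureTicks :: IO a -> IO (a, Word64)
measureTicks act = do
  before <- rdtsc
  result <- act
  after  <- rdtsc
  return (result, after - before)

main :: IO ()
main = do
  (_, ticks) <- measureTicks (print (sum [1 .. 1000000 :: Integer]))
  putStrLn ("elapsed ticks: " ++ show ticks)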


-- Andy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: Low-level Haskell profiling [Was: Re: [Haskell-cafe] Re: A suggestion for the next high profile Haskell project]

2006-12-21 Thread Andy Georges

Alexey,

Well, AFAIK, PAPI abstracts away the platform dependencies quite
well, so I guess your code can run straightforwardly on all IA-32
platforms (depending on the events you wish to measure, which may
or may not be present on all platforms). PowerPC, Itanium, MIPS and
Alpha should work as well, IIRC. If the GHC backend can generate
code there, that is.


As the code stands now, data cache misses can be measured in a
platform-independent way. For branch mispredictions, I am using
Opteron-specific counters, for reasons I no longer remember. Maybe I
couldn't find platform-independent counters in PAPI for branch
misprediction.


Hmm, I think they should be there, IIRC. Anyway, it seems quite cool  
that you're doing that.

Have you published anything about that?


We are in the process of writing such a paper right now. My wish is
to have the related code submitted to HEAD as soon as
possible :). But at the moment we still have to tweak and clean up
our optimisations a bit more.


Nice. Would you mind letting me know when you submit something?
I'm quite interested.


I should get around to starting a wiki page about using PAPI one of
these days, but meanwhile feel free to contact me if you need further
information or help.


I've been toying with this idea for a while [4], but never had the  
time to do something with it. If you have some cool stuff, let us  
know. I'm very interested.


The code in HEAD will allow you to get numbers for the mutator
and the GC separately. Also, I have hacked nofib-analyse so you can
compare CPU statistics among different runs of the nofib suite.
This is not committed yet; I guess it will make its way to the PAPI
wiki page once it's up. I will let you know when I bring the page up.


Great, thanks!

-- Andy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Low-level Haskell profiling [Was: Re: [Haskell-cafe] Re: A suggestion for the next high profile Haskell project]

2006-12-20 Thread Andy Georges

Hi,

The GHC head can currently build against PAPI[1], a library for  
gathering CPU statistics.


I did not know that. I know PAPI, though I prefer using perfctr  
directly, at least for what I'm doing (stuff in a JVM) [1], [2], [3].


At the moment you can only gather such statistics for the AMD Opteron,
but it shouldn't be difficult to port this to other CPUs after a bit
of browsing around the PAPI docs. Installing PAPI requires
installing a Linux kernel driver though, so it is not for the
faint-hearted.


Well, AFAIK, PAPI abstracts away the platform dependencies quite
well, so I guess your code can run straightforwardly on all IA-32
platforms (depending on the events you wish to measure, which may or
may not be present on all platforms). PowerPC, Itanium, MIPS and Alpha
should work as well, IIRC. If the GHC backend can generate code
there, that is.




We have used this library to find bottlenecks in the current code  
generation and we have implemented ways of correcting them, so  
expect some good news about this in the future.




Have you published anything about that?

I should get around to starting a wiki page about using PAPI one of
these days, but meanwhile feel free to contact me if you need further
information or help.


I've been toying with this idea for a while [4], but never had the  
time to do something with it. If you have some cool stuff, let us  
know. I'm very interested.


-- Andy

[1] Eeckhout, L.; Georges, A.; De Bosschere, K. How Java Programs
Interact with Virtual Machines at the Microarchitectural Level.
Proceedings of the 18th Annual ACM SIGPLAN Conference on
Object-Oriented Programming, Systems, Languages and Applications
(OOPSLA 2003). ACM. 2003. pp. 169-186.
[2] Georges, A.; Buytaert, D.; Eeckhout, L.; De Bosschere, K.
Method-Level Phase Behavior in Java Workloads. Proceedings of the 19th
ACM SIGPLAN Conference on Object-Oriented Programming Systems,
Languages and Applications. ACM Press. 2004. pp. 270-287.
[3] Georges, A.; Eeckhout, L.; De Bosschere, K. Comparing Low-Level
Behavior of SPEC CPU and Java Workloads. Proceedings of Advances
in Computer Systems Architecture: 10th Asia-Pacific Conference, ACSAC
2005. Springer-Verlag GmbH. Lecture Notes in Computer Science. Vol.
3740. 2005. pp. 669-679.

[4] http://sequence.complete.org/node/68
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: A suggestion for the next high profile Haskell project

2006-12-18 Thread Andy Georges

Hi,

I have to dispute Bulat's characterisation here. We can solve lots
of nice problems and have high performance *right now*. Particularly
concurrency problems, and ones involving streams of bytestrings.
No need to leave the safety of GHC either, nor resort to low-level
evil code.

let's go further in this long-term discussion. i've read the Shootout
problems and concluded that there are only 2 tasks whose speed depends
on the code-generation abilities of the compiler; all other tasks
depend on the speed of the libraries used. just for example - in one
test TCL was the fastest language. why? because this test contained
almost nothing but 1000 calls to the regex engine with very large
strings, and TCL's regex engine was the fastest


Maybe it would not be a bad idea to check the number of cache misses,
branch mispredictions, etc. per instruction executed for the shootout
apps, in different languages, and of course in Haskell, on the
platforms GHC targets. Do you think it might be interesting to the GHC
developer community to have such an overview? It might show potential
bottlenecks, I think.


-- Andy


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Aim Of Haskell

2006-12-12 Thread Andy Georges

Hi,


Actually, the more I think of it, the more I think we should rename
the language altogether. It seems like people say Haskell with
stress on the first syllable if they were either on the committee or
learned it inside academia, and Haskell with stress on the second
syllable if they learned it from online sources. And we really don't
need more pronunciation-based class distinctions.


If you'd all speak West Flemish, the problem would solve itself :-)

We say Haskul -

Has(lle)ul(cer)

At least, that's what I think the Oxford dictionary means with its
pronunciation description.


Maybe we can claim it should be 'has kell', where kell is something
cool, and not cornflakes. It has kell.


-- Andy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Aim Of Haskell

2006-12-12 Thread Andy Georges

Hi,

On 13 Dec 2006, at 00:17, Joachim Durchholz wrote:


Kirsten Chevalier schrieb:

I think that it would serve this
community well if somebody was able to achieve a better understanding
of the social reasons why some programming languages are adopted and
some aren't. I think all of us already know that the reason isn't
because some are better than others, but it might be time for
someone to go beyond that.


Actually, it's quite simple: following the ideology du jour, and
teaching-relevant support.


Teachers will teach what's mainstream ideology (I'm using
ideology in a strictly neutral sense here).
Pascal was popular because teachers felt that structured
programming should be taught to the masses, and you couldn't abuse
goto in Pascal to make a program unstructured.
Later, universities shifted more towards economic usefulness,
which made C (and, later, Java) much more interesting ideologically.


Since the rise of Java, our university has been teaching almost
nothing else: a short course in C, and the FP course is being phased
out. Some teachers had an interest in having Java-knowledgeable kids
graduate. I guess industry also asked for Java knowledge in general.
I think it's sad for the students. A language is sometimes more than
just syntax; the paradigms it uses should be known, and I've seen too
many students who have no clue what a pointer is, who cannot apply
simple things such as map and filter ... I'm no Haskell wizard, but
the very basics I do grok.


Teaching-relevant support means: readily available tools. I.e.  
compilers, debuggers, editor support, and all of this with campus  
licenses or open sourced.



I don't think that Haskell can compete on the ideological front
right now. That domain is firmly in the area of C/C++/Java. Erlang
isn't really winning here either, but it does have the advantage of
being connected to success stories from Ericsson.
To really compete, Haskell needs what people like to call
industrial strength: industrial-strength compilers,
industrial-strength libraries, industrial-strength IDEs. In other
words, seamless Eclipse and Visual Studio integration, heaps and
heaps of libraries, and bullet-proof compilers, all of this working
right out of the box. (I see that this all is being worked on.)


Having a(n important) company backing Haskell in a
platform-independent way would certainly help, IMHO. But to convince
people to use it, they need to be taught before they go out to find
a job.


-- Andy

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Writing Haskell For Dummies Or At Least For People Who Feel Like Dummies When They See The Word 'Monad'

2006-12-11 Thread Andy Georges


On 11 Dec 2006, at 19:35, Kirsten Chevalier wrote:


On 12/11/06, Andrew Wagner [EMAIL PROTECTED] wrote:

Well, perhaps if nothing else, we could use a wikibook to
collaboratively work on the structure of such a book, and then from
that you could publish a real book. I don't really know the legal
issues, though. I am thinking of several books though which have been
written and released both as full paper books, and as free digital
books. Could we do something similar?


I definitely think using a wiki to work on the book would be a good
idea. I just wouldn't want to imply that that meant it would
necessarily be a public wiki or that it would be around forever. The
legal issues are basically that publishers don't want to publish books
that people can get for free off the web (whether or not you agree
with this logic). There are exceptions to this, like Lessig's _Free
Culture_, but it's my impression that they usually involve authors who
have enough sway that publishers will let them get away with whatever
they want.


Well, I know that e.g. Cory Doctorow puts his books online for
free, and he seems to have no trouble also getting printed versions
sold (see for example http://craphound.com/someone/). So I guess it
should be possible to do, especially because the demand will be quite
large, IMO. A collection of real-world examples a la Dive Into Python
would certainly be at the top of my to-buy list.


-- Andy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Writing Haskell For Dummies Or At Least For People Who Feel Like Dummies When They See The Word 'Monad'

2006-12-11 Thread Andy Georges

Hi,


I wonder if a similar theme is appropriate for the proposed book.
Graphics and sounds give very direct feedback to the programmer, and
I expect that helps with motivation.
Perhaps a single largish application could be the end product of the
book, like a game or something. You'd start off with some examples
early on, and then as quickly as possible start working on the
low-level utility functions for the game, moving on to more and more
complex things as the book progresses. You'd inevitably have to deal
with things like performance and other real-world tasks.
It might be difficult to find something which would work well, though.



Maybe this idea (ok, isJust) comes to mind because I'm looking around
at CakePHP, which is a Rails-like framework for PHP, but a real-life
example could be something like Rails. It need not be as extensive or
fully fledged, but enough that people can get the hang of things
and take it from there. That would include DB interaction, web
interaction, logging, XML and what have you. It might just require
enough of the more exotic Haskell stuff to get newbies up to speed.


Details can be tackled when they arise, or deferred to an
appendix if they would bloat the actual issues being explained.


Just my €.02

-- Andy

PS. I still belong somewhat to the latter category of the subject.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Aim Of Haskell

2006-12-10 Thread Andy Georges

Hi,

one particular thing that we still lack is something like a book
'Haskell in the real world'


We need a 'Dive into Haskell' book.

-- Andy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Trace parser

2006-07-10 Thread Andy Georges

Hi Lemmih,



Have you tried profiling the code?
You can find a guide to profiling with GHC here:
http://www.haskell.org/ghc/docs/latest/html/users_guide/profiling.html


I did that ... it shows that updateState is retaining most data (-hr
switch), as well as updateMap, which is increasing its retained set
toward the end, whereas updateState simply rockets off to high
levels and then gradually descends. I'm not sure how to fix that.
Obviously, the methodStack will grow and shrink up to the depth of
the execution stack of my application, but that should be about it.
The SYSTEM set is also quite big as far as retained data goes,
declining quite slowly up to the end of the execution.


My gut feeling tells me that I should make sure the updated state is
actually evaluated and not simply kept around as a thunk, but I've
no idea how to make that happen.
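
One way to get there, sketched under the assumption that the parser runs
in Control.Monad.State: force the new state to WHNF before storing it.
With strict fields in the state record, WHNF on the constructor forces
the fields as well. (Later mtl versions ship exactly this as modify'.)

import Control.Monad.State

-- A strict variant of modify: evaluate the new state before putting it,
-- so unevaluated update thunks do not pile up between trace lines.
modifyStrict :: MonadState s m => (s -> s) -> m ()
modifyStrict f = do
  s <- get
  let s' = f s
  s' `seq` put s'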


-- Andy

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Trace parser

2006-07-06 Thread Andy Georges

Hello,

I'm looking for a bit of help (ok, a lot) to speed up my program,
which I use to build a call tree out of an annotated program execution
trace. To give you an idea of the current sluggishness: for a 70MB
trace, it has been running for about 10 hours straight (AMD Athlon XP
(Barton) 2600+).


The trace contains lines made up of a number of fields:

C 4 1000 100
C 4 1001 1000200
R 4 1001 1003045
R 4 1000 1003060

C indicates a function entry point (call), R indicates a function
exit point (return). The second field indicates which thread is
executing the function, the third field denotes the function id, and
the last field contains a performance counter value. As you can see,
numbering each line with a pre-order and a post-order number yields a
list that can be transformed easily into a tree, which can then be
manipulated. The first goal is to build the tree. This is done in the
following code:



data ParserState = ParserState { methodStack   :: !ThreadMap
                               , methodQueue   :: !ThreadMap
                               , pre           :: !Integer
                               , post          :: !Integer
                               , methodMap     :: !MethodMap
                               , currentThread :: !Integer
                               } deriving (Show)

initialParserState :: ParserState
initialParserState = ParserState e e 0 0 e 0
  where e = M.empty :: Map Integer a

readInteger :: B.ByteString -> Integer
readInteger = fromIntegral . fst . fromJust . B.readInt

parseTraceMonadic :: [B.ByteString] -> ParserState
parseTraceMonadic ss = state { methodQueue = M.map reverse (methodQueue state) }
  where state = execState (mapM_ (\x -> modify (updateState x) >> get >>= (`seq` return ())) ss)
                          initialParserState

updateState :: B.ByteString -> ParserState -> ParserState
updateState s state = case (B.unpack $ head fields) of
    "M" -> updateStateMethod    fields state
    "E" -> updateStateException fields state
    "C" -> updateStateEntry     fields state
    "R" -> updateStateExit      fields state
  where fields = B.splitWith (== ' ') s

updateStateMethod :: [B.ByteString] -> ParserState -> ParserState
updateStateMethod (_:methodId:methodName:_) state =
  state { methodMap = M.insert (readInteger methodId) methodName (methodMap state) }

updateStateException :: [B.ByteString] -> ParserState -> ParserState
updateStateException _ state = state

updateStateEntry :: [B.ByteString] -> ParserState -> ParserState
updateStateEntry (_:ss) state =
  {- Debug.Trace.trace ("before: " ++ show state ++ "\nafter: " ++ show newstate) $ -}
  newstate
  where newstate = state { methodStack = updateMap thread (methodStack state)
                                                   (\x y -> Just (x:y)) (pre state, 0, method)
                         , pre = (+1) $! pre state
                         }
        method = mkMethod (Prelude.map B.unpack ss)
        thread = Method.thread method

updateStateExit :: [B.ByteString] -> ParserState -> ParserState
updateStateExit (_:ss) state =
  {- Debug.Trace.trace ("before: " ++ show state) $ -}
  case updateMethod m (Prelude.map B.unpack ss) of
    Just um -> state { methodStack = M.update (\x -> Just (tail x)) thread (methodStack state)
                     , methodQueue = updateMap thread (methodQueue state)
                                               (\x y -> Just (x:y)) (pre_, post state, um)
                     , post = (+1) $! post state
                     }
    Nothing -> error $ "Top of the stack is mismatching! Expected " ++ show m
                       ++ " yet got " ++ show ss ++ "\n" ++ show state
  where method = mkMethod (Prelude.map B.unpack ss)
        thread = Method.thread method
        (pre_, _, m) = case M.lookup thread (methodStack state) of
          Just stack -> head stack
          Nothing    -> error $ "Method stack has not been found for thread "
                                ++ show thread ++ " - fields: " ++ show ss

updateMap key map f value = case M.member key map of
  True  -> M.update (f value) key map
  False -> M.insert key [value] map

As you can see, the state is updated for each entry: a stack is
maintained with the methods we've seen up to now, and a list with
methods that have received both pre- and post-order numbers and of
which both the entry and exit point have been parsed. I am using
ByteString because a plain String causes the program to grab far too
much heap.

mkMethod yields a Method like this:


data Method = Method { mid               :: Integer
                     , thread            :: Integer
                     , instruction_entry :: Integer
                     , instruction_exit  :: Integer
                     } deriving (Eq, Show)

eM = Method 0 0 0 0

mkMethod :: [String] -> Method
mkMethod s = let [_thread, _id, _entry] = take 3 $

Re: [Haskell-cafe] Can't explain this error

2005-07-12 Thread Andy Georges


On 12 Jul 2005, at 14:39, Dinh Tien Tuan Anh wrote:


parts 0 = [[]]
parts x = [concat (map (y:) parts(x-y) | y <- [1..(x `div` 2)]]


First of all ... there is a ')' missing ... I guess the line should read

parts x = [concat (map (y:) (parts (x-y))) | y <- [1..(x `div` 2)]]

?

-- Andy

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Can't explain this error

2005-07-11 Thread Andy Georges


On 11 Jul 2005, at 17:37, Dinh Tien Tuan Anh wrote:


sumHam :: Integer -> Float
sumHam n = sum [1/x | x <- [1..n]]


Try this:

sumHam :: Integer -> Float
sumHam n = sum [1.0 / fromIntegral x | x <- [1..n]]


-- Andy

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Solution to Thompson's Exercise 4.4

2005-03-12 Thread Andy Georges
Hi all,

 when this example occurs in the text, the new Haskell coder has not been
 introduced to most of what you suggest.

I didn't realise that. All apologies. 

Kind regards,
Andy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Solution to Thompson's Exercise 4.4

2005-03-11 Thread Andy Georges
Hi Kaoru,

 I have been working through the exercises in Thompson's The Craft of
 Functional Programming 2nd Ed book. I am looking for a solution web
 site for Thompson's book. Or maybe the people here can help.

 In exercise 4.4, I am asked to define a function

 howManyOfFourEqual :: Int -> Int -> Int -> Int -> Int

 which returns the number of integers that are equal to each other. For
 example,

  howManyOfFourEqual 1 1 1 1 = 4
  howManyOfFourEqual 1 2 3 1 = 2
  howManyOfFourEqual 1 2 3 4 = 0

A solution which is applicable to any number of arguments is this:

import Data.List (group, sort)

howManyOfFourEqual :: Int -> Int -> Int -> Int -> Int
howManyOfFourEqual a b c d = determineMaxEquals [a, b, c, d]

determineMaxEquals :: Ord a => [a] -> Int
determineMaxEquals ls = case maximum (map length (group (sort ls))) of
                          1 -> 0   -- all elements distinct, as in the last example
                          n -> n

Of course, determineMaxEquals is fubar if used on an infinite list.

Regards,
Andy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe