Re: [Haskell-cafe] Is there still at list of active Haskell projects

2011-09-24 Thread David Peixotto
The Haskell Communities and Activities Report is another good source to check 
for active Haskell projects.

http://www.haskell.org/haskellwiki/Haskell_Communities_and_Activities_Report

On Sep 24, 2011, at 4:42 PM, Vasili I. Galchin wrote:

 Hello,
  
 On http://www.haskell.org I didn't see a list of active Haskell projects. 
 ??
  
  
 Thanks,
  
 Bill
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe



Re: [Haskell-cafe] cap 3: stopping thread 3 (stackoverflow)

2011-06-07 Thread David Peixotto
GHC starts threads with a small stack size to efficiently support lightweight 
concurrency. As a thread uses more stack space, it will be expanded as needed 
up to some maximum fixed size. I think these stack overflow events you see are 
the runtime expanding the thread stacks. 

You can adjust the initial and maximum stack sizes using the -k (initial) and 
-K (max) RTS options.

Quoting from the GHC users guide 
(http://www.haskell.org/ghc/docs/7.0-latest/html/users_guide/runtime-control.html#setting-rts-options):

-ksize
[Default: 1k] Set the initial stack size for new threads. Thread stacks 
(including the main thread's stack) live on the heap, and grow as required. The 
default value is good for concurrent applications with lots of small threads; 
if your program doesn't fit this model then increasing this option may help 
performance.

The main thread is normally started with a slightly larger heap to cut down on 
unnecessary stack growth while the program is starting up.

-Ksize
[Default: 8M] Set the maximum stack size for an individual thread to size 
bytes. This option is there purely to stop the program eating up all the 
available memory in the machine if it gets into an infinite loop.
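To make the grow events concrete, here is a minimal sketch (file name, thread count, and flag values are made up for illustration): a deeply recursive, non-tail call in a forked thread forces the runtime to expand that thread's stack, which shows up as "stopping thread N (stack overflow)" followed immediately by "running thread N" in the eventlog, even though the program finishes normally.

```haskell
-- Build with: ghc -O2 -threaded -rtsopts -eventlog Deep.hs
-- Run with:   ./Deep +RTS -N2 -k1k -K32m -ls -RTS
-- (-k sets the initial per-thread stack, -K the maximum, -ls writes
--  the eventlog that show-ghc-events can read.)
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Non-tail recursion: each call adds a stack frame, so forcing the
-- result uses stack proportional to n and triggers stack growth.
deep :: Int -> Int
deep 0 = 0
deep n = 1 + deep (n - 1)

main :: IO ()
main = do
  done <- newEmptyMVar
  _ <- forkIO (print (deep 500000) >> putMVar done ())
  takeMVar done
```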

On Jun 7, 2011, at 3:55 AM, Johannes Waldmann wrote:

 
 As a workaround, you can use the show-ghc-events binary that is
 provided by the ghc-events package.
 
 Thanks, I wasn't aware of that. 
 
 Are the following lines normal for an eventlog?
 
 ...
 1877298000: cap 1: waking up thread 4 on cap 1
 1877299000: cap 1: thread 4 is runnable
 1877305000: cap 6: thread 4 is runnable
 1877306000: cap 1: migrating thread 4 to cap 6
 1877334000: cap 1: running thread 16
 1877345000: cap 6: running thread 4
 1877348000: cap 6: stopping thread 4 (thread finished)
 1877428000: cap 3: stopping thread 14 (stack overflow)
 1877428000: cap 3: running thread 14
 1877501000: cap 1: stopping thread 16 (stack overflow)
 1877503000: cap 1: running thread 16
 1877606000: cap 3: stopping thread 14 (stack overflow)
 1877607000: cap 3: running thread 14
 1877658000: cap 1: stopping thread 16 (stack overflow)
 1877659000: cap 1: running thread 16
 1877723000: cap 4: stopping thread 10 (stack overflow)
 1877724000: cap 4: running thread 10
 1877769000: cap 3: stopping thread 14 (stack overflow)
 18: cap 3: running thread 14
 ...
 
 


Re: [Haskell-cafe] DSL for task dependencies

2011-03-17 Thread David Peixotto
Hi Serge,

You may be thinking of the Shake DSL presented by Neil Mitchell at last year's 
Haskell Implementers Workshop. Slides and video are available from: 
http://haskell.org/haskellwiki/HaskellImplementorsWorkshop/2010

Max Bolingbroke has an open source implementation available here: 
https://github.com/batterseapower/openshake

Hope that helps.

-David

On Mar 17, 2011, at 3:00 PM, Serge Le Huitouze wrote:

 Hi Haskellers!
 
 I think I remember reading a blog post or web page describing a
 EDSL to describe tasks and their dependencies a la make.
 
 Can anyone point me to such published material?
 
 Thanks in advance.
 
 --serge
 


Re: [Haskell-cafe] DPH and GHC 7.0.1

2010-11-19 Thread David Peixotto
There were some problems getting DPH to work well with the changes in GHC 7. 
There is more info in this mail:

http://www.haskell.org/pipermail/cvs-ghc/2010-November/057574.html

The short summary is that there will be a patch level release of GHC (7.0.2) 
that works well with DPH and the DPH packages will then be available for 
installation from Hackage. 

If you want to play with DPH now you can do so on GHC HEAD.

-David

On Nov 19, 2010, at 5:26 PM, Jake McArthur wrote:

 On 11/19/2010 05:24 PM, Gregory Propf wrote:
 I was hoping to play around with Data.Parallel.Haskell (dph) but noticed
 that it seems to have been exiled from ghc 7.0.1 which I just installed.
 It also doesn't seem to be in cabal. Anybody know how to use dph with
 7.0.1 or has it been abandoned or something?
 
 It's not abandoned. The library components have been separated from GHC. I'm 
 sure the intent is to put it on Hackage.
 
 - Jake


[Haskell-cafe] Re: [Haskell] ANNOUNCE: The Fibon benchmark suite (v0.2.0)

2010-11-12 Thread David Peixotto

Hi Jason,

Sorry for the delayed response. Thanks for pointing out the darcs-benchmark
package. I had not seen that before and there may be some room for sharing
infrastructure. Parsing the runtime stats is pretty easy, but comparing
different runs, computing statistics, and generating tables should be a
common task.

On a related note, when I uploaded the fibon package, I put it in a new
Benchmarking category as opposed to the existing Testing category. In my
mind testing is more for correctness and benchmarking is for performance. I
think it would be useful to include other benchmarking packages
(darcs-benchmark, criterion) in that category.



--
From: Jason Dagit da...@codersbase.com
Sent: Tuesday, November 09, 2010 7:58 PM
To: David Peixotto d...@rice.edu
Cc: hask...@haskell.org; haskell-cafe@haskell.org
Subject: Re: [Haskell] ANNOUNCE: The Fibon benchmark suite (v0.2.0)


On Tue, Nov 9, 2010 at 5:47 PM, David Peixotto d...@rice.edu wrote:



On Nov 9, 2010, at 3:45 PM, Jason Dagit wrote:

I have a few questions:
  * What differentiates fibon from criterion?  I see both use the
statistics package.


I think the two packages have different benchmarking targets.

Criterion allows you to easily test individual functions and gives some
help with benchmarking in the presence of lazy evaluation. If some code
does not execute for a long time it will run it multiple times to get
sensible timings. Criterion does a much more sophisticated statistical
analysis of the results, but I hope to incorporate that into the Fibon
analysis in the future.

Fibon is a more traditional benchmarking suite like SPEC or nofib. My
interest is using it to test compiler optimizations. It can only
benchmark at the whole program level by running an executable. It checks
that the program produces the correct output, can collect extra metrics
generated by the program, separates collecting results from analyzing
results, and generates tables directly comparing the results from
different benchmark runs.

  * Does it track memory statistics?  I glanced at the FAQ but didn't
see anything about it.


Yes, it can read memory statistics dumped by the GHC runtime. It has
built-in support for reading the stats dumped by `+RTS -t
--machine-readable` which includes things like bytes allocated and time
spent in GC.


Oh, I see.  In that case, it's more similar to darcs-benchmark. Except
that darcs-benchmark is tailored specifically at benchmarking darcs.
Where they overlap is parsing the RTS statistics, running the whole
program, and tabular reports. Darcs-benchmark adds to that an embedded
DSL for specifying operations to do on the repository between benchmarks
(and translating those operations to runnable shell snippets).

I wonder if Fibon and darcs-benchmark could share common infrastructure
beyond the statistics package.  It sure sounds like it to me.  Perhaps
some collaboration is in order.


  * Are the numbers in the sample output seconds or milliseconds?  What
is the stddev (eg., what does the distribution of run-times look like)?


I'm not sure which results you are referring to exactly (the numbers in
the announcement were lines of code). I picked benchmarks that all ran
for at least a second (and hopefully longer) with compiler optimizations
enabled. On an 8-core Xeon, the median time over all benchmarks is 8.43
seconds, mean time is 12.57 seconds and standard deviation is 14.56
seconds.


I probably read your email too fast, sorry.  Thanks for the clarification.

Thanks,
Jason




[Haskell-cafe] ANNOUNCE: The Fibon benchmark suite (v0.2.0)

2010-11-09 Thread David Peixotto
I'm pleased to announce the release of the Fibon benchmark tools and suite.

Fibon is a set of tools for running and analyzing benchmark programs in
Haskell. Most importantly, it includes an optional set of benchmark
programs including many programs taken from the Hackage open source
repository.

The source code for the tools and benchmarks is available on github

  https://github.com/dmpots/fibon
  http://github.com/dmpots/fibon-benchmarks

The Fibon tools (without the benchmarks) are available on hackage.

  http://hackage.haskell.org/package/fibon

The package needs to be unpacked and built in place to be able to run any
benchmarks. It can be used with the official Fibon benchmarks or you can
create your own suite and just use Fibon to run and analyze your benchmark
programs.
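For example, building in place might look like the following (the version number in the directory name is assumed from this announcement; adjust to whatever cabal unpacks):

```
cabal unpack fibon
cd fibon-0.2.0
cabal configure
cabal build
```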

Some more documentation is available on the fibon wiki

  https://github.com/dmpots/fibon/wiki

Fibon Tools
===
Fibon is a pure Haskell framework for running and analyzing benchmark
programs. Cabal is used for building the benchmarks. The benchmark
harness, configuration files, and benchmark descriptions are all written in
Haskell. The benchmark descriptions and run configurations are all statically
compiled into the benchmark runner to ensure that configuration errors are
found at compile time.

The Fibon tools are not tied to any compiler infrastructure and can build
benchmarks using any compiler supported by cabal. However, there are some
extra features available when using GHC to build the benchmarks:

  * Support in config files for using an inplace GHC HEAD build
  * Support in `fibon-run` for collecting GC stats from GHC compiled programs
  * Support in `fibon-analyse` for reading GC stats from Fibon result files

The Fibon Benchmark Suite
===
The Fibon benchmark suite currently contains 34 benchmarks from a variety of
sources. The individual benchmarks and lines of code are given below.

Dph
  _DphLib        316
  Dotp           308
  Qsort          236
  QuickHull      680
  Sumsq           72
  --
  TOTAL         1612

Hackage
  Agum           786
  Bzlib          432
  Cpsa         11582
  Crypto        4486
  Fgl           3834
  Fst           4532
  Funsat       16085
  Gf           23970
  HaLeX         4035
  Happy         5833
  Hgalib         819
  Palindromes    496
  Pappy         7313
  QuickCheck    4495
  Regex         6873
  Simgi         5134
  TernaryTrees   722
  Xsact         2783
  --
  TOTAL       104210

Repa
  _RepaLib      8775
  Blur            77
  FFT2d           89
  FFT3d          103
  Laplace        274
  MMult          133
  --
  TOTAL         9451

Shootout
  BinaryTrees     63
  ChameneosRedux  96
  Fannkuch        27
  Mandelbrot      68
  Nbody          192
  Pidigits        26
  SpectralNorm    97
  --
  TOTAL          569



[Haskell-cafe] Re: [Haskell] ANNOUNCE: The Fibon benchmark suite (v0.2.0)

2010-11-09 Thread David Peixotto

On Nov 9, 2010, at 3:45 PM, Jason Dagit wrote:
 I have a few questions:
   * What differentiates fibon from criterion?  I see both use the statistics 
 package.

I think the two packages have different benchmarking targets.

Criterion allows you to easily test individual functions and gives some help 
with benchmarking in the presence of lazy evaluation. If some code does not 
execute for a long time it will run it multiple times to get sensible timings. 
Criterion does a much more sophisticated statistical analysis of the results, 
but I hope to incorporate that into the Fibon analysis in the future.

Fibon is a more traditional benchmarking suite like SPEC or nofib. My interest 
is using it to test compiler optimizations. It can only benchmark at the whole 
program level by running an executable. It checks that the program produces the 
correct output, can collect extra metrics generated by the program, separates 
collecting results from analyzing results, and generates tables directly 
comparing the results from different benchmark runs.

   * Does it track memory statistics?  I glanced at the FAQ but didn't see 
 anything about it.

Yes, it can read memory statistics dumped by the GHC runtime. It has built in 
support for reading the stats dumped by `+RTS -t --machine-readable` which 
includes things like bytes allocated and time spent in GC.
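The `--machine-readable` dump is itself Haskell-readable: after a first line echoing the command, the RTS prints a `[(String, String)]` literal, so a parser sketch is mostly just `read`. This is a sketch only, not Fibon's actual parser, and the key name used below is an example of what the RTS emits rather than an exhaustive list:

```haskell
import System.Environment (getArgs)

-- Skip the echoed command line and read the list of (name, value) pairs.
parseStats :: String -> [(String, String)]
parseStats = read . dropWhile (/= '[')

main :: IO ()
main = do
  [file]   <- getArgs          -- stderr captured from a benchmark run
  contents <- readFile file
  -- Pick out allocation; check your own dump for the exact key names.
  mapM_ print [ p | p@("bytes allocated", _) <- parseStats contents ]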

   * Are the numbers in the sample output seconds or milliseconds?  What is 
 the stddev (eg., what does the distribution of run-times look like)?

I'm not sure which results you are referring to exactly (the numbers in the 
announcement were lines of code). I picked benchmarks that all ran for at least 
a second (and hopefully longer) with compiler optimizations enabled. On an 
8-core Xeon, the median time over all benchmarks is 8.43 seconds, mean time is 
12.57 seconds and standard deviation is 14.56 seconds.

-David



Re: [Haskell-cafe] Will GHC 6.14 with LLVM use LLVM C compiler to compile external C Libraries

2010-09-09 Thread David Peixotto
I'm not sure using Clang would make it any *easier* to use external sources, 
but it could provide opportunities for optimizing across the C/Haskell 
boundary. The main difficulty in getting it all working correctly is the 
linking step. The Mac OS X linker can [link together llvm bitcode][1] for 
link-time optimization, but the support on Linux is less mature. You have to 
use the [gold linker][2] if you want to optimize bitcode at link time.

I made an attempt in May to compile the GHC runtime with Clang. The process is 
documented in this [blog post][3]. I was interested in using LLVM's link-time 
optimization to optimize parts of the runtime together with a compiled Haskell 
program. While I never got that far, just getting the GHC runtime to compile 
with Clang was a bit difficult. It uses some GCC specific extensions (pinned 
global registers, and __thread for thread local data) that did not work well 
with Clang. In particular, lack of support for these two extensions made it 
impossible to compile the threaded runtime.

I think it would be very interesting to see what kind of performance benefits 
we could get from using the LLVM backend in GHC to link with LLVM bitcode 
generated by Clang.

[1] 
http://developer.apple.com/library/mac/releasenotes/DeveloperTools/RN-llvm-gcc/index.html#//apple_ref/doc/uid/TP40007133-CH1-SW14
[2] http://llvm.org/docs/GoldPlugin.html
[3] http://www.dmpots.com/blog/2010/05/08/building-ghc-with-clang.html

On Sep 9, 2010, at 7:10 AM, Mathew de Detrich wrote:

 Since GHC 6.14 will (hopefully) use LLVM as a default backend, an idea has 
 occurred to me
 
 Should GHC also use the clang (C/C++-LLVM compiler) on external C library 
 sources which are used with certain Haskell packages (such as gtk) when LLVM 
 does become a default backend for GHC. The consensus is that since Clang will 
 also produce LLVM 'assembler', it can be very easily linked with the LLVM 
 'assembler' produced by GHC's LLVM backend, making the process of using 
 external C sources a lot easier. Parts of Clang required could even be 
 integrated into GHC (although this may be tricky since its coded in C++). It 
 should also hopefully make using Haskell packages on windows that use C 
 sources less painful
 
 Clang could also make using FFI with C++ much easier (for reasons stated 
 above)
 
 Thoughts?


Re: [Haskell-cafe] CnC Haskell

2010-06-25 Thread David Peixotto
There is a reference for the CnC grammar in the repository for the .NET 
implementation. 

http://github.com/dmpots/CnC.NET/blob/master/CnC.NET/CnC.NET/cnc.grammar

The parser specification for fsyacc (the F# YACC implementation) is here:

http://github.com/dmpots/CnC.NET/blob/master/CnC.NET/CnC.NET/Parser.fsy

The textual representation is still in flux a bit, but this grammar should be 
enough of a guide for implementing a parser in Haskell. The grammar is left 
recursive, so using a parser generator like Happy would be a good choice.
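As a hypothetical starting point (token, nonterminal, and AST constructor names here are invented, not taken from the CnC grammar), a Happy fragment for item-collection declarations could look like this; note the left-recursive `Decls` rule, which is exactly the shape Happy handles efficiently:

```
%token
  '['   { TLBracket }
  ']'   { TRBracket }
  ';'   { TSemi }
  ident { TIdent $$ }

%%

Decls : Decls Decl              { $2 : $1 }  -- left recursion suits Happy
      | Decl                    { [$1] }

Decl  : '[' Type ident ']' ';'  { ItemColl $2 $3 }

Type  : ident                   { TyCon $1 }
      | Type ident              { TyApp $1 (TyCon $2) }
```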

The textual representation will actually be a bit different depending on the 
underlying language since the types of items stored in a collection are part of 
the description. For example in C, an item collection that stores an array of 
ints would be declared like:

[int* A];

but in Haskell we would want to write something like

[Array Int Int A];

I think dealing with type declarations in the textual representation would be 
the main difference in implementing the parser in Haskell. Once the 
textual representation has been parsed to an AST it should be possible to 
generate the Haskell code that builds the graph using the haskell-cnc package.

-David

On Jun 23, 2010, at 3:56 PM, Vasili I. Galchin wrote:

 
 
 On Wed, Jun 23, 2010 at 3:47 PM, Don Stewart d...@galois.com wrote:
 vigalchin:
  Hello,
 
   I have been reading work done at Rice University:  http://
  habanero.rice.edu/cnc. Some work has been done by http://www.cs.rice.edu/
  ~dmp4866/ on CnC for .Net. One component that David wrote a CnC translator 
  that
  translates CnC textual form to the underlying language, e.g. F#. Is anybody
  working on a CnC textual form translator for Haskell so a Haskell user of 
  CnC
  Haskell can write in a higher level??
 
 Ah, so by a translator from high level CnC form to this:
 

 http://hackage.haskell.org/packages/archive/haskell-cnc/latest/doc/hml/Intel-Cnc.html
 
^^ exactly what I mean
  
 ? Do you have a reference for the CnC textual form?
  ^^ if you mean something like a context-free grammatical 
 definition of the CnC textual form ,,, the answer is I haven't seen such a 
 reference.
 
 V.
 
 
 
 -- Don
 