Re: [Haskell-cafe] [ANN] lvish 1.0 -- successor to monad-par

2013-10-02 Thread Ben Gamari
Ryan Newton rrnew...@gmail.com writes:

 Hi all,

 I'm pleased to announce the release of our new parallel-programming
 library, LVish:

 hackage.haskell.org/package/lvish

 It provides a Par monad similar to the monad-par package, but generalizes
 the model to include data-structures other than single-assignment variables
 (IVars).  For example, it has lock-free concurrent data structures for Sets
 and Maps, which are constrained to only grow monotonically during a given
 runPar (to retain determinism).  This is based on work described in our
 upcoming POPL 2014 paper:

Do you have any idea why the Haddocks don't yet exist? If I recall
correctly, under Hackage 1 the module names wouldn't be made links until
Haddock generation had completed. Currently the lvish module links point
to non-existent URLs.

Also, is there a publicly accessible repository where further
development will take place?

Cheers,

- Ben



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Proposal: Pragma EXPORT

2013-09-17 Thread Ben Gamari
Ivan Lazar Miljenovic ivan.miljeno...@gmail.com writes:

 On 17 September 2013 09:35, Evan Laforge qdun...@gmail.com wrote:

snip

 None of this is a big deal, but I'm curious about others' opinions on
 it.  Are there strengths to the separate export list that I'm missing?

 I do like the actual summary aspect as you've noted, as I can at
 times be looking through the actual code rather than haddock
 documentation when exploring new code (or even trying to remember what
 I wrote in old code).

The summary of functionality that the export list provides is a very
nice feature that I often miss in other languages.

That being said, it brings up a somewhat related issue that may become
increasingly problematic with the rising use of libraries such as
lens: exporting members defined by Template Haskell. While we have nice
sugar for exporting all accessors of a record (MyRecord(..)), we have no
way to do the same for analogous TH-generated members such as
lenses. Instead, we require that the user laboriously list each member
of the record, taking care not to forget any.

One approach would be to simply allow TH to add exports as presented in
Ticket #1475 [1]. I can't help but wonder if there's another way, however. 
One (questionable) option would be to allow globbing patterns in export
lists.

Another approach might be to introduce some notion of a name list which
can appear in the export list. These lists could be built up by either
user declarations in the source module or in Template Haskell splices
and would serve as a way to group logically related exports. This
would allow uses such as (excuse the terrible syntax),

module HelloWorld ( namelist MyDataLenses
  , namelist ArithmeticOps
  ) where

import Control.Lens

data MyData = MyData { ... }
makeLenses ''MyData
-- makeLenses defines a namelist called MyDataLenses

namelist ArithmeticOps (add)
add = ...

namelist ArithmeticOps (sub)
sub = ...

That being said, there are a lot of reasons why we wouldn't want to
introduce such a mechanism,

  * we'd give up the comprehensive summary that the export list
currently provides

  * haddock headings already provide a perfectly fine means for
grouping logically related exports

  * it's hard to envision the implementation of such a feature without
the introduction of new syntax

  * there are arguably few uses for such a mechanism beyond exporting TH
constructs

  * you still have the work of solving the issues presented in #1475

Anyways, just a thought.

Cheers,

- Ben


[1] http://ghc.haskell.org/trac/ghc/ticket/1475




Re: [Haskell-cafe] name lists

2013-09-17 Thread Ben Gamari
Roman Cheplyaka r...@ro-che.info writes:

 * Ben Gamari bgamari.f...@gmail.com [2013-09-17 10:03:41-0400]
 Another approach might be to introduce some notion of a name list which
 can appear in the export list. These lists could be built up by either
 user declarations in the source module or in Template Haskell splices
 and would serve as a way to group logically related exports. This
 would allow uses such as (excuse the terrible syntax),

 Hi Ben,

 Isn't this subsumed by ordinary Haskell modules, barring the current
 compilers' limitation that modules are in 1-to-1 correspondence with
 files (and thus are somewhat heavy-weight)?

 E.g. the above could be structured as

   module MyDataLenses where
 data MyData = MyData { ... }
 makeLenses ''MyData

   module HelloWorld (module MyDataLenses, ...) where
 ...

True. Unfortunately I've not seen much motion towards relaxing this
limitation[1].

Cheers,

- Ben


[1] http://ghc.haskell.org/trac/ghc/ticket/2551




Re: [Haskell-cafe] Ideas on a fast and tidy CSV library

2013-07-23 Thread Ben Gamari
Justin Paston-Cooper paston.coo...@gmail.com writes:

 Dear All,

 Recently I have been doing a lot of CSV processing. I initially tried to
 use the Data.Csv (cassava) library provided on Hackage, but I found this to
 still be too slow for my needs. In the meantime I have reverted to hacking
 something together in C, but I have been left wondering whether a tidy
 solution might be possible to implement in Haskell.

Have you tried profiling your cassava implementation? In my experience
I've found it's quite quick. If you have an example of a slow path I'm
sure Johan (cc'd) would like to know about it.

 I would like to build a library that satisfies the following:

 1) Run a function f :: a_1 -> ... -> a_n -> m (Maybe (b_1, ..., b_n)),
 with m some monad and the a's and b's being input and output.

 2) Be able to specify a maximum record string length and output record
 string length, so that the string buffers used for reading and outputting
 lines can be reused, preventing the need for allocating new strings for
 each record.

 3) Allocate only once, the memory where the parsed input values, and output
 values are put.

Ultimately this could be rather tricky to enforce. Haskell code
generally does a lot of allocation and the RTS is well optimized to
handle this.

I've often found that trying to shoehorn a non-idiomatic optimal
imperative approach into Haskell produces worse performance than the
more readable, idiomatic approach.
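To make the contrast concrete, here is a deliberately minimal sketch of the
list-based style (the `splitFields` name and the no-quoting simplification
are mine; real code should just use cassava's parser):

```haskell
-- A naive comma splitter (no quoting or escaping), purely to illustrate
-- the idiomatic list-based style before reaching for manual buffers.
splitFields :: String -> [String]
splitFields s = case break (== ',') s of
  (field, [])       -> [field]
  (field, _ : rest) -> field : splitFields rest
```

GHC's list fusion and the RTS's allocation machinery handle this style far
better than one might expect coming from C.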

I understand this leaves many of your questions unanswered, but I'd give
the idiomatic approach a bit more time before trying to coerce C into
Haskell. Profile, see where the hotspots are and optimize
appropriately. If the profile has you flummoxed, the lists and #haskell
are always willing to help given the time.

Cheers,

- Ben





Re: [Haskell-cafe] Array, Vector, Bytestring

2013-06-03 Thread Ben Gamari
Artyom Kazak y...@artyom.me writes:

 silvio silvio.fris...@gmail.com wrote on Mon, 03 Jun 2013
 22:16:08 +0300:

 Hi everyone,

 Every time I want to use an array in Haskell, I find myself having to  
 look up in the doc how they are used, which exactly are the modules I  
 have to import ... and I am a bit tired of staring at type signatures  
 for 10 minutes to figure out how these arrays work every time I use them  
 (It's even worse when you have to write the signatures). I wonder how  
 other people perceive this issue and what possible solutions could be.

 Recently I’ve started to perceive this issue as “hooray, we have lenses  
 now, a generic interface for all the different messy stuff we have”. But  
 yes, the inability to have One Common API for All Data Structures is  
 bothering me as well.

 Why do we need so many different implementations of the same thing? In  
 the ghc libraries alone we have a vector, array and bytestring package  
 all of which do the same thing, as demonstrated for instance by the  
 vector-bytestring package. To make matters worse, the Haskell 2010
 standard includes a watered-down version of array.

 Indeed. What we need is `text` for strings (and stop using `bytestring`)  
 and reworked `vector` for arrays (with added code from `StorableVector` —  
 basically a lazy ByteString-like chunked array).

To be perfectly clear, ByteString and Text target very different
use-cases and are hardly interchangeable. While ByteString is, as the
name suggests, a string of bytes, Text is a string of characters in a
Unicode encoding. When you are talking about unstructured binary data,
you should most certainly be using ByteString.
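To make the distinction concrete, here is a toy UTF-8 encoder (hand-rolled
purely for illustration; in practice Data.Text.Encoding does this) showing
that one Char can correspond to several bytes:

```haskell
import Data.Bits (shiftR, (.&.))
import Data.Char (ord)
import Data.Word (Word8)

-- Minimal UTF-8 encoder for code points below U+10000, to show that a
-- "string of characters" and a "string of bytes" are different things.
utf8Bytes :: Char -> [Word8]
utf8Bytes c
  | n < 0x80    = [f n]
  | n < 0x800   = [0xC0 + f (n `shiftR` 6), tail6 0]
  | n < 0x10000 = [0xE0 + f (n `shiftR` 12), tail6 6, tail6 0]
  | otherwise   = error "utf8Bytes: astral code points omitted for brevity"
  where
    n = ord c
    f = fromIntegral
    -- continuation byte: 10xxxxxx carrying six bits starting at `sh`
    tail6 sh = 0x80 + f ((n `shiftR` sh) .&. 0x3F)
```

So '€' (U+20AC) is one Char but three bytes, which is exactly the kind of
difference that matters when choosing between Text and ByteString.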

Cheers,

- Ben



Re: [Haskell-cafe] multivariate normal distribution in Haskell?

2013-04-14 Thread Ben Gamari
Bas de Haas w.b.deh...@uu.nl writes:

 Dear List,

 I’m implementing a probabilistic model for recognising musical chords in
 Haskell. This model relies on a multivariate normal distribution. I’ve
 been searching the internet and mainly hackage for a Haskell library to
 do this for me, but so far I’ve been unsuccessful.

 What I’m looking for is a Haskell function that does exactly what the
 mvnpdf function in matlab does:
 http://www.mathworks.nl/help/stats/multivariate-normal-distribution.html

 Does anyone know a library that can help me out?

As you are likely aware, the trouble with the multivariate normal is the
required inversion of the covariance. If you make assumptions concerning
the nature of the covariance (e.g. force it to be diagonal or low
dimensional) the problem gets much easier. To treat the general, high
dimensional case, you pretty much need a linear algebra library
(e.g. HMatrix) to perform the inversion (and determinant for proper
normalization). Otherwise, implementing the function given the inverse
is quite straightforward.
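For instance, given the precision matrix (the inverted covariance) and the
log-determinant, the density is just a quadratic form. A sketch with plain
lists (the names and the list representation are mine; a real implementation
would use HMatrix types for the linear algebra):

```haskell
-- Log-density of a multivariate normal, assuming the caller has already
-- inverted the covariance and computed log(det covariance), e.g. with HMatrix.
mvnLogPdf :: [Double]    -- mean
          -> [[Double]]  -- precision matrix (inverse covariance)
          -> Double      -- log-determinant of the covariance
          -> [Double]    -- point at which to evaluate
          -> Double
mvnLogPdf mu prec logDetCov x =
    -0.5 * (k * log (2 * pi) + logDetCov + quadForm)
  where
    k          = fromIntegral (length mu)
    d          = zipWith (-) x mu
    matVec m v = map (sum . zipWith (*) v) m
    quadForm   = sum (zipWith (*) d (matVec prec d))
```

Exponentiating the result gives the mvnpdf value; working in log space avoids
underflow for high-dimensional inputs.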

Cheers,

- Ben



Re: [Haskell-cafe] Type level natural numbers

2013-04-04 Thread Ben Gamari
Mateusz Kowalczyk fuuze...@fuuzetsu.co.uk writes:

 About two weeks ago we got an email (at ghc-users) mentioning that
 comparing to 7.6, 7.7.x snapshot would contain (amongst other things),
 type level natural numbers.

 I believe the package used is at [1].

 Can someone explain what use is such package in Haskell? I understand
 uses in a language such as Agda where we can provide proofs about a
 type and then use that to perform computations using the type system
 (such as guaranteeing that concatenating two vectors together will
 give a new one with the length of the two initial vectors combine)
 however as far as I can tell, this is not the case in Haskell
 (although I don't want to say ?impossible? and have Oleg jump me).

It most certainly will be possible to do type level arithmetic. For one
use-case, see Linear.V from the linear library [1]. The
DataKinds work is already available in 7.6, allowing one to use type
level naturals, but the type checker is unable to unify arithmetic
operations.
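As a small illustration of what DataKinds already buys you, here is a
length-indexed vector sketch using GHC.TypeLits (the Vec and vlength names
are mine; note that nothing here stops you from wrapping a list of the wrong
length, so smart constructors would still be needed):

```haskell
{-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}
import Data.Proxy (Proxy (..))
import GHC.TypeLits (KnownNat, Nat, natVal)

-- A vector whose length is recorded as a phantom type-level natural.
newtype Vec (n :: Nat) a = Vec [a]

-- Recover the type-level length at runtime via the KnownNat dictionary.
vlength :: forall n a. KnownNat n => Vec n a -> Integer
vlength _ = natVal (Proxy :: Proxy n)
```

Appending two vectors at type `Vec (n + m) a` is where the type checker's
ability to reason about arithmetic becomes essential.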

Cheers,

- Ben


[1] 
http://hackage.haskell.org/packages/archive/linear/1.1.1/doc/html/Linear-V.html



Re: [Haskell-cafe] GSoC Project Proposal: Markdown support for Haddock

2013-04-04 Thread Ben Gamari
Johan Tibell johan.tib...@gmail.com writes:

 On Thu, Apr 4, 2013 at 9:49 AM, Johan Tibell johan.tib...@gmail.com wrote:
 I suggest that we implement an alternative haddock syntax that's a
 superset of Markdown. It's a superset in the sense that we still want
 to support linkifying Haskell identifiers, etc. Modules that want to
 use the new syntax (which will probably be incompatible with the
 current syntax) can set:

 {-# HADDOCK Markdown #-}

 Let me briefly argue for why I suggested Markdown instead of the many
 other markup languages out there.

 Markdown has won. Look at all the big programming sites out there,
 from GitHub to StackOverflow, they all use a superset of Markdown. It
 did so mostly (in my opinion) because it codified the formatting style
 people were already using in emails and because it was pragmatic
 enough to include HTML as an escape hatch.

For what it's worth, I think Markdown is a fine choice for very much the
same reason. RST has some nice properties (especially for documenting
Python), but Markdown is much more common. Moreover, I've always found
RST's linkification syntax a bit awkward.

Cheers,

- Ben




Re: [Haskell-cafe] Taking over ghc-core

2012-11-10 Thread Ben Gamari
Shachaf Ben-Kiki shac...@gmail.com writes:

 With Don Stewart's blessing
 (https://twitter.com/donsbot/status/267060717843279872), I'll be
 taking over maintainership of ghc-core, which hasn't been updated
 since 2010. I'll release a version with support for GHC 7.6 later
 today.

Thanks! I was needing ghc-core a few weeks ago. It'll be nice to see
this project maintained again.

Cheers,

- Ben




Re: [Haskell-cafe] Deriving settings from command line, environment, and files

2012-11-01 Thread Ben Gamari
David Thomas davidleotho...@gmail.com writes:

 Is there a library that provides a near-complete solution for this?
 I looked around a bit and found many (many!) partial solutions on hackage,
 but nothing that really does it all.  In coding it up for my own projects,
 however, I can't help but feel like I must be reinventing the wheel.

 What I want is something that will process command line options (this
 seems to be where most packages are targeted), environment variables,
 and settings files (possibly specified in options), and override/default
 appropriately.

 Did I miss something?

Do you have an example of a library in another language that does what
you are looking for? boost's program_options library can handle
arguments read from a configuration file, but that's the closest example
I can come up with.

The design space for command line parsing libraries alone
is pretty large; when one adds in configuration file and environment
variable parsing, option overriding, and the like it's downright
massive. Moreover, programs vary widely in their configuration
requirements; it seems to me that it would be very difficult to hit a
point in this space which would be usable for a sufficiently large
number of programs to be worth the effort. This is just my two cents,
however.

If I were you, I'd look into building something on top of an existing
option parsing library. I think optparse-applicative is particularly
well suited to this since it nicely separates the definition of the
options from the data structure used to contain them and doesn't rely on
template haskell (unlike cmdargs). Composing this with configuration
file and environment variable parsing seems like it shouldn't be too
tough.
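The layering itself is simple once each source yields a Maybe; a sketch of
the precedence logic (the resolveSetting name and the particular ordering
are mine):

```haskell
import Control.Applicative ((<|>))
import Data.Maybe (fromMaybe)

-- One setting resolved by precedence: command line beats environment
-- variable beats configuration file beats the built-in default.
resolveSetting :: Maybe String  -- from the command-line parser
               -> Maybe String  -- from the environment
               -> Maybe String  -- from the config file
               -> String        -- built-in default
               -> String
resolveSetting cli env file def = fromMaybe def (cli <|> env <|> file)
```

The Alternative instance for Maybe does the overriding for free; an
optparse-applicative parser, System.Environment.lookupEnv, and a config-file
parser would each feed in one of the Maybe arguments.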

Cheers,

- Ben




Re: [Haskell-cafe] Adding custom events to eventlog

2012-10-19 Thread Ben Gamari
Janek S. fremenz...@poczta.onet.pl writes:

 Dear list,

 I'm using ThreadScope to improve the performance of my parallel program. It
 would be very helpful for me if I could place custom things in the eventlog
 (e.g. now function x begins). Is this possible?

Yes, it certainly is possible. Have a look at Debug.Trace.traceEvent and
traceEventIO. I have found these to be a remarkably powerful tool for
understanding parallel performance.
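A minimal sketch (the labelledSum name is mine): compile with -eventlog, run
with +RTS -l, and the marker appears in ThreadScope's timeline.

```haskell
import Debug.Trace (traceEvent)

-- traceEvent attaches a user event to the eventlog when the wrapped
-- value is forced; without the eventlog enabled it is a no-op.
labelledSum :: Int -> Int
labelledSum n = traceEvent "starting sum" (sum [1 .. n])
```

traceEventIO is the IO-flavoured sibling for marking points in an IO action
rather than around a pure value.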

Cheers,

- Ben




Re: [Haskell-cafe] Possible bug in Criterion or Statistics package

2012-09-30 Thread Ben Gamari
Aleksey Khudyakov alexey.sklad...@gmail.com writes:

 On 13.08.2012 19:43, Ryan Newton wrote:
 Terrible!  Quite sorry that this seems to be a bug in the monad-par library.

 I'm copying some of the other monad-par authors and we hopefully can get
 to the bottom of this.  If it's not possible to create a smaller
 reproducer, is it possible to share the original test that triggers this
 problem?  In the meantime, it's good that you can at least run without
 parallelism.

 Here is a slightly simplified version of the original test case. By itself
 the program is very small, but there is statistics and criterion on top of
 monad-par. The failure occurs in the function
 Statistics.Resampling.Bootstrap.bootstrapBCA. However, I couldn't trigger the
 bug with mock data.

Has there been any progress or an official bug report on this?

Cheers,

- Ben




Re: [Haskell-cafe] Is it worth adding Gaussian elimination and eigenvalues to REPA?

2012-08-31 Thread Ben Gamari
KC kc1...@gmail.com writes:

 I realize if one wants speed you probably want to use the hMatrix
 interface to GSL, BLAS and LAPACK.

 Worth it in the sense of having a purely functional implementation.

I, for one, have needed these in the past and far prefer Repa's
interface to that of hMatrix. I considered implementing these myself, but
I doubt that I could write an implementation worthy of use, having
relatively little knowledge of this flavor of numerics (stability is a
pain, so I hear).

Cheers,

- Ben




Re: [Haskell-cafe] Hackage 2 maintainership

2012-06-20 Thread Ben Gamari
Krzysztof Skrzętnicki gte...@gmail.com writes:

 Hi,

 Are there any news how things are going?

Things have been pretty stagnant yet again. I was more than a bit
over-optimistic concerning the amount of time I'd have available to put
into this project. Moreover, the tasks required to get Hackage into a
usable state weren't nearly as clear as I originally thought.

Unfortunately, those who have historically been very active in Hackage
development and would have been responsible for much of the recent work
in advancing Hackage 2 to where it is now have other demands on their
time. My understanding is there is a fair amount of half-completed code
floating around.

 What remains there to be done to get us to Hackage 2?

 I found this list of tickets:
 https://github.com/haskell/cabal/issues?labels=hackage2page=1state=open

 Is there anything more to be done?

This list is definitely a start. Another issue that has come to light
is the size of the server's memory footprint. Unfortunately,
acid-state's requirement that all data either be in memory or have no
ACID guarantees was found to be a bit of a limitation. If I recall
correctly some folks were playing around with restructuring the data
structures a bit to reduce memory usage. I really don't know what
happened to these efforts.

On the other hand, it seems that the test server is still ticking away
at http://hackage.factisresearch.com/ with an up-to-date package index,
so things are looking alright on that front.

At this point, it seems that we are in a situation yet again where
someone just needs to start applying gentle pressure and figure out
where we stand. I'm afraid especially now I simply don't have time to
take on this project in any serious capacity.

Perhaps one way forward would be to propose the project again as a GSoC
project next year. That being said, there is no guarantee that someone
would step up to finish it.

Just my two cents.

Cheers,

- Ben




Re: [Haskell-cafe] Haskell (GHC 7) on ARM

2012-06-10 Thread Ben Gamari


Joshua Poehls jos...@poehls.me writes:

 Hello Ben,

Hello,

Sorry for the latency. I'm currently on vacation in Germany so I haven't
had terribly consistent Internet access.

I've Cc'd haskell-cafe@ as I've been meaning to document my experiences
anyways and your email seems like a good excuse to do this.


 I just got a Raspberry Pi and I'm interested in running Haskell on it. So
 far I'm running Arch Linux ARM and I noticed there is is a GHC package
 available, but it is version 6.12.3.

 Doing research online I saw that you did some work on adding GHCi support
 for ARM in GHC 7.4. I was wondering if you had anything to share related to
 running GHC/GHCI 7.4 on ARM, specifically Arch Linux ARM.


Indeed my ARM linker patch was merged into GHC 7.4.2. This means that
ARM is now a fully supported target platform for GHC, complete with GHCi
support (which is pretty much necessary for many ubiquitous packages).

My ARM development environment consists of a PandaBoard (and before this
a BeagleBoard) running Linaro's Ubuntu distribution. For this reason,
I'm afraid I won't be of terribly much help in the case of Arch. That
being said, GHC has a wonderful build system which has never given me
trouble (except in cases of my own stupidity). Just installing the
required development packages and following the build instructions on
Wiki[1] is generally quite sufficient. I also have some notes of my
own[2] where I've collected my experiences.

As you likely know, GHC doesn't have a native ARM code generator of its
own, instead relying on LLVM for low-level code generation. For this
reason, you'll first need to have a working LLVM build. I have found
that the ARM backend in the 3.0 release is a bit buggy and therefore
generally pull directly from git (HEAD on my development box is
currently sitting at 7750ff1e3cbb which seems to be stable). That being
said, I haven't done much work in this space recently and it's quite
likely that the 3.1 release is better in this respect.

Compiling LLVM is one case where I have found the limited memory of my
BeagleBoard (which is I believe is similar to the Raspberry Pi) to be
quite troublesome. There are cases during linking where over a gigabyte
of memory is necessary. At this stage the machine can sit for hours
thrashing against its poor SD card making little progress. For this
reason, I would strongly recommend that you cross-compile LLVM. The
process is pretty straightforward and I have collected some notes on the
matter[3].

The size of GHC means that in principle one would prefer to
cross-compile. Unfortunately, support[4] for this is very new (and
perhaps not even working/complete[5]?) so we have to make do with what
we have. The build process itself is quite simple. You will want to be
very careful to ensure that the ghc/libraries/ trees are sitting on the
same branch as your ghc/ tree (with the ghc/sync-all
script). Inconsistency here can lead to very ambiguous build
failures. Otherwise, the build is nearly foolproof (insofar as this word
can describe any build process). The build takes the better part of a
day on my dual-core Pandaboard. I can't recall what the maximum memory
consumption was but I think it might fit
in 512MB.

Lastly, if I recall correctly, you will find that you'll need to first
build GHC 7.0 bootstrapped from 6.12 before you can build 7.4 (which
requires a fairly new stage 0 GHC).

Let the list know if you encounter any issues. I'll try to dust off my
own development environment once I get back to the states next week to
ensure that everything still works. I've been meaning to setup the
PandaBoard as a build slave as Karel's has been failing for some time
now (perhaps you could look into this, Karel?). Moreover, perhaps I can
find a way to produce redistributable binaries to help others get
started.

Cheers,

- Ben



[1] http://hackage.haskell.org/trac/ghc/wiki/Building
[2] http://bgamari.github.com/posts/ghc-llvm-arm.html
[3] http://bgamari.github.com/posts/cross-compiling_llvm.html
[4] http://hackage.haskell.org/trac/ghc/wiki/CrossCompilation
[5] http://www.haskell.org/pipermail/cvs-ghc/2012-February/070791.html




Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-16 Thread Ben Gamari
Kevin Charter kchar...@gmail.com writes:

snip
 For example, imagine you're new to the language, and as an exercise decide
 to write a program that counts the characters on standard input and writes
 the count to standard output. A naive program in, say, Python will probably
 use constant space and be fairly fast. A naive program in Haskell stands a
 good chance of having a space leak, building a long chain of thunks that
 isn't forced until it needs to write the final answer.  On small inputs,
 you won't notice. The nasty surprise comes when your co-worker says cool,
 let's run it on this 100 MB log file! and your program dies a horrible
 death. If your friend is a sceptic, she'll arch an eyebrow and secretly
 think your program -- and Haskell -- are a bit lame.

I, for one, can say that my initial impressions of Haskell were quite
scarred by exactly this issue. Being in experimental physics, I often
find myself writing data analysis code doing relatively simple
manipulations on large(ish) data sets. My first attempt at tackling a
problem in Haskell took nearly three days to get running with reasonable
performance. I can only thank the wonderful folks in #haskell profusely
for all of their help getting through that period. That being said, it
was quite frustrating at the time and I often wondered how I could
tackle a reasonably large project if I couldn't solve even a simple
problem with halfway decent performance. If it weren't for #haskell, I
probably would have given up on Haskell right then and there, much to my
deficit.
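For what it's worth, the character-counting pitfall from the quoted example
boils down to one combinator choice; a minimal sketch (function names mine):

```haskell
import Data.List (foldl')

-- The lazy left fold builds a chain of (n + 1) thunks as long as the
-- input; foldl' forces the accumulator at each step and runs in
-- constant space, which is what the naive counter needs.
countLazy, countStrict :: String -> Int
countLazy   = foldl  (\n _ -> n + 1) 0
countStrict = foldl' (\n _ -> n + 1) 0
```

On a 100 MB input, countLazy is the version that dies a horrible death.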

While things have gotten easier, even now, nearly a year after writing
my first line of Haskell, I still occasionally find a performance
issue^H^H^H^H surprise. I'm honestly not really sure what technical
measures could be taken to ease this learning curve. The amazing
community helps quite a bit but I feel that this may not be a
sustainable approach if Haskell gains greater acceptance. The profiler
is certainly useful (and much better with GHC 7.4), but there are still
cases where the performance hit incurred by the profiling
instrumentation precludes this route of investigation without time
consuming guessing at how to pare down my test case. It's certainly not
an easy problem.

Cheers,

- Ben




Re: [Haskell-cafe] Can Haskell outperform C++?

2012-05-16 Thread Ben Gamari
Yves Parès yves.pa...@gmail.com writes:

 The profiler is certainly useful (and much better with GHC 7.4)

 What are the improvements in that matter? (I just noticed that some GHC
 flags wrt profiling have been renamed)

The executive summary can be found in the release notes[1]. There was
also a talk I remember watching a while ago which gave a pretty nice
overview. I can't recall exactly, but it might have been this[2]. Lastly,
profiling now works with multiple capabilities[3].

Cheers,

- Ben


[1] http://www.haskell.org/ghc/docs/7.4.1/html/users_guide/release-7-4-1.html
[2] http://www.youtube.com/watch?v=QBFtnkb2Erg
[3] https://plus.google.com/107890464054636586545/posts/hdJAVufhKrD




Re: [Haskell-cafe] Google Summer of Code - Lock-free data structures

2012-04-05 Thread Ben Gamari
Ben midfi...@gmail.com writes:

 perhaps it is too late to suggest things for GSOC --

 but stephen tetley on a different thread pointed at aaron turon's
 work, which there's a very interesting new concurrency framework he
 calls reagents which seems to give the best of all worlds : it is
 declarative and compositional like STM, but gives performance akin to
 hand-coded lock-free data structures.  he seems to have straddled the
 duality of isolation vs message-passing nicely, and can subsume things
 like actors and the join calculus.

 http://www.ccs.neu.edu/home/turon/reagents.pdf

 he has a BSD licensed library in scala at

 https://github.com/aturon/ChemistrySet

 if someone doesn't want to pick this up for GSOC i might have a hand
 at implementing it myself.

Keep us in the loop if you do. I have a very nice application that has
been needing a nicer approach to concurrency than IORefs but
really can't afford STM.

Cheers,

- Ben




Re: [Haskell-cafe] using mutable data structures in pure functions

2012-03-11 Thread Ben Gamari
I'm sure others will want to chime in here, but I'll offer my two cents.

On Sun, 11 Mar 2012 22:38:56 -0500, E R pc88m...@gmail.com wrote:
snip
 
 So, again, what is the Haskell philosophy towards using mutable data
 structures in pure functions? Is it:
 
 1. leave it to the compiler to find these kinds of opportunities
 2. just use the immutable data structures - after all they are just as
 good (at least asymptotically)
 3. you don't want to use mutable data structures because of _
 4. it does happen, and some examples are __
 
You will find that, a surprising amount of the time, option 2 (just
using the immutable structures) will be sufficient. After all, programmer
time is frequently more expensive than processor time.

That being said, there are some cases where you really do want to be
able to utilize a mutable data structure inside of an otherwise pure
algorithm. This is precisely the use of the ST monad. ST serves to allow
the use of mutable state inside of a function while hiding the fact from
the outside world.
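A minimal sketch of that pattern (the sumST example is mine, standing in for
a real imperative algorithm):

```haskell
import Control.Monad.ST (runST)
import Data.STRef (modifySTRef', newSTRef, readSTRef)

-- A pure function that mutates internally: runST guarantees the STRef
-- cannot escape, so callers see an ordinary [Int] -> Int function.
sumST :: [Int] -> Int
sumST xs = runST $ do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef' ref (+ x)) xs
  readSTRef ref
```

The type of runST is what enforces the purity: the state-thread variable is
universally quantified, so no reference can leak out of the computation.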

Cheers,

- Ben




Re: [Haskell-cafe] Hackage 2 maintainership

2012-02-14 Thread Ben Gamari
On Tue, 14 Feb 2012 02:59:27 -0800 (PST), Kirill Zaborsky qri...@gmail.com 
wrote:
 I apologize,
 But does hackage.haskell.org being down for some hours have anything
 to do with the process of bringing up Hackage 2?
 
Nope, it will be some time before we are in a position to touch
hackage.haskell.org.

Cheers,

- Ben



Re: [Haskell-cafe] Hackage 2 maintainership

2012-02-14 Thread Ben Gamari
On Tue, 14 Feb 2012 02:06:16 +, Duncan Coutts 
duncan.cou...@googlemail.com wrote:
 On 14 February 2012 01:53, Duncan Coutts duncan.cou...@googlemail.com wrote:
  Hi Ben,
snip

 Ah, here's the link to my last go at getting people to self-organise.
 http://www.haskell.org/pipermail/cabal-devel/2011-October/007803.html
 
Excellent. I'll give it a read-through.

 You should find it somewhat useful. It gives an overview of people who
 are / have been involved.
 
It seems the first task will be to identify exactly what needs to be
done before we can begin the transition and record these tasks in a
single place. I don't have a particularly strong opinion concerning
where this should be (Trac, the Hackage wiki, the github issue tracker
that's been mentioned), but we should consolidate everything we have in
a single place.

 We did get another reasonable push at the time. In particular Max did
 a lot of good work. I'm not quite sure why it petered out again, I'd
 have to ask Max what went wrong, if it was my fault for letting things
 block on me or if it was just holidays/christmas. Maintaining momentum
 is hard.
 
This is quite true. I'll try to keep a constant push.

On another note, how did your full mirroring go last night?

Cheers,

- Ben

P.S. Duncan, sorry for the duplicate message.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Hackage 2 maintainership

2012-02-13 Thread Ben Gamari
Hey all,

Those of you who follow the Haskell subreddit no doubt saw today's post
regarding the status of Hackage 2. As has been said many times in the
past, the primary blocker at this point to the adoption of Hackage 2
appears to be the lack of an administrator.

It seems to me this is a poor reason for this effort to be held
up. Having taken a bit of time to consider, I would be willing to put in
some effort to get things moving and would be willing to maintain the
haskell.org Hackage 2.0 instance going forward if necessary.

I currently have a running installation on my personal machine and
things seem to be working as they should. On the whole, installation was
quite trivial, so it seems likely that the project is indeed at a point
where it can take real use (although a logout option in the web
interface would make testing a bit easier).

That being said, it would in my opinion be silly to proceed without
fixing the Hackage trac. It was taken down earlier this year due to
spamming[1] and it seems the recovery project has been orphaned. I would
be willing to help with this effort, but it seems like someone more
familiar with the haskell.org infrastructure might be better equipped to
handle the situation.

It seems that this process will go something like this,
  1) Bring Hackage trac back from the dead
  2) Bring up a Hackage 2 instance along-side the existing
 hackage.haskell.org
  3) Enlist testers
  4) Let things simmer for a few weeks/months ensuring nothing explodes
  5) After it's agreed that things are stable, eventually swap the
 Hackage 1 and 2 instances

This will surely be a non-trivial process but I would be willing to move
things forward.

Cheers,

- Ben


[1] http://www.haskell.org/pipermail/cabal-devel/2012-January/008427.html

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] SMP parallelism increasing GC time dramatically

2012-01-09 Thread Ben Gamari
On Mon, 9 Jan 2012 18:22:57 +0100, Mikolaj Konarski 
mikolaj.konar...@gmail.com wrote:
 Tom, thank you very much for the ThreadScope feedback.
 Anything new? Anybody? We are close to a new release,
 so that's the last call for bug reports before the release.
 Stay tuned. :)

As it turns out, I ran into a similar issue with a concurrent Gibbs
sampling implementation I've been working on. Increasing -H fixed the
regression, as expected. I'd be happy to provide data if someone was
interested.

Cheers,

- Ben

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] os.path.expanduser analogue

2011-11-20 Thread Ben Gamari
While the filepath package on the whole does an excellent job of
providing basic path manipulation tools, one weakness is its inability
to resolve ~/... style POSIX paths. Python handles this with
os.path.expanduser. Perhaps a similar function would be a useful
addition to filepath?

Cheers,

- Ben

Possible (but untested) implementation:

import Data.List (isPrefixOf)
import System.Posix.User (getLoginName)

expandUser :: FilePath -> IO FilePath
expandUser p = if "~/" `isPrefixOf` p
                 then do u <- getLoginName
                         return $ u ++ drop 2 p
                 else return p
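For what it's worth, a sketch of a more robust variant (hypothetical, and not the improved version posted later in this thread): it resolves against the actual home directory via System.Directory rather than reconstructing a path from the login name.

```haskell
import Data.List (isPrefixOf)
import System.Directory (getHomeDirectory)

-- Hypothetical variant: expand "~/" using the real home directory;
-- any other path is returned unchanged. drop 1 keeps the slash.
expandUser' :: FilePath -> IO FilePath
expandUser' p
  | "~/" `isPrefixOf` p = do home <- getHomeDirectory
                             return (home ++ drop 1 p)
  | otherwise           = return p
```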

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] os.path.expanduser analogue

2011-11-20 Thread Ben Gamari
On Sun, 20 Nov 2011 21:02:30 -0500, Brandon Allbery allber...@gmail.com wrote:
 On Sun, Nov 20, 2011 at 20:36, Ben Gamari bgamari.f...@gmail.com wrote:
[Snip]
 
 Although arguably there should be some error checking.
 
Thanks for the improved implementation. I should have re-read my code
before sending as it wasn't even close to correct.

Cheers,

- Ben


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] timezone-series, timezone-olson dependencies

2011-11-10 Thread Ben Gamari
Is there a reason why the current version of the timezone-series and
timezone-olson packages depend on time < 1.3? With time 1.4 being widely
used at this point this will cause conflicts with many packages yet my
tests show that both packages work fine with time 1.4. Could we have
this upper bound bumped to 1.5?
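For concreteness, the relaxed constraint in the .cabal files would read something like the following (the lower bound here is assumed, not taken from the packages):

```
build-depends: time >= 1.2 && < 1.5
```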

Cheers,

- Ben

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] zlib build failure on recent GHC

2011-11-07 Thread Ben Gamari
With GHC 1ece7b27a11c6947f0ae3a11703e22b7065a6b6c zlib fails to build,
apparently due to Safe Haskell (bug 5610 [1]). The error is specifically,

$ cabal install zlib
Resolving dependencies...
Configuring zlib-0.5.3.1...
Preprocessing library zlib-0.5.3.1...
Building zlib-0.5.3.1...
[1 of 5] Compiling Codec.Compression.Zlib.Stream ( 
dist/build/Codec/Compression/Zlib/Stream.hs, 
dist/build/Codec/Compression/Zlib/Stream.o )

Codec/Compression/Zlib/Stream.hsc:857:1:
Unacceptable argument type in foreign declaration: CInt
When checking declaration:
  foreign import ccall unsafe "static zlib.h inflateInit2_" c_inflateInit2_
:: StreamState -> CInt -> Ptr CChar -> CInt -> IO CInt

Codec/Compression/Zlib/Stream.hsc:857:1:
Unacceptable argument type in foreign declaration: CInt
When checking declaration:
  foreign import ccall unsafe "static zlib.h inflateInit2_" c_inflateInit2_
:: StreamState -> CInt -> Ptr CChar -> CInt -> IO CInt

Codec/Compression/Zlib/Stream.hsc:857:1:
Unacceptable result type in foreign declaration: IO CInt
Safe Haskell is on, all FFI imports must be in the IO monad
When checking declaration:
  foreign import ccall unsafe "static zlib.h inflateInit2_" c_inflateInit2_
:: StreamState -> CInt -> Ptr CChar -> CInt -> IO CInt
...

This is a little strange since,

  a) It's not clear why Safe Haskell is enabled
  b) The declarations in question seem to be valid

Does this seem like a compiler issue to you?

Cheers,

- Ben

[1] http://hackage.haskell.org/trac/ghc/ticket/5610

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Time zones and IO

2011-11-06 Thread Ben Gamari
Recently I've been working on an iCalendar parser implementation with
support for recurrence rules. For the recurrence logic, I've been
relying on Chris Heller's excellent time-recurrence
package. Unfortunately, it seems we have both encountered great
difficulty in cleanly handling time zones. For example, as seen in the
case of the CalendarTime LocalTime instance[1], Chris does the following,

 instance CalendarTimeConvertible LocalTime where
   toCalendarTime (LocalTime day t) =
       CalendarTime (fromEnum $ todSec t) (todMin t) (todHour t)
                    d (toEnum m) y weekDay yearDay tz
     where
       (y, m, d, weekDay, yearDay) = dayInfo day
       tz = unsafePerformIO getCurrentTimeZone

Ouch. Unfortunately, I have been entirely unable to come up with any
better way to deal with timezones in a referentially transparent
way. Passing the current timezone as an argument seems heavy-handed and
reveals the leakiness of the CalendarTimeConvertible abstraction. Short
of making toCalendarTime an action, can anyone see a suitable way of
refactoring CalendarTimeConvertible to avoid unsafe IO?

Perhaps a TimeZoneInfo monad is in order, exposing lookup of arbitrary
(through timezone-olson) as well the current timezone? It seems like
this would be inconvenient and overkill, however.
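To make the TimeZoneInfo idea concrete, here is a minimal sketch (names hypothetical) using a ReaderT over the zone: conversions stay referentially transparent, and IO happens exactly once, when the caller supplies a zone.

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)
import Data.Time (LocalTime, TimeZone, getCurrentTime,
                  getCurrentTimeZone, utcToLocalTime)

-- Hypothetical "TimeZoneInfo" monad: the zone is threaded implicitly,
-- so no conversion needs unsafePerformIO.
type TZ = ReaderT TimeZone IO

localNow :: TZ LocalTime
localNow = do
  tz  <- ask
  utc <- lift getCurrentTime
  return (utcToLocalTime tz utc)

main :: IO ()
main = do
  tz <- getCurrentTimeZone
  runReaderT localNow tz >>= print
```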

Ideas?

- Ben


[1] 
https://github.com/hellertime/time-recurrence/blob/master/src/Data/Time/CalendarTime/CalendarTime.hs#L92

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsec line number off-by-one

2011-09-21 Thread Ben Gamari
On Wed, 21 Sep 2011 11:27:31 +0200, Christian Maeder christian.mae...@dfki.de 
wrote:
 Hi,
 
 1. your lookAhead is unnecessary, because your items (atomNames) never 
 start with %.
 
I see.

 2. your try fails in (line 12, column 1), because the last item (aka 
 atomName) starts consuming \n, before your eol parser is called.
 
Ahh, this is a good point. I for some reason seeded the thought in my
mind that spaces takes the ' ' character, not '\n'.

 So rather than calling spaces before every real atom, I would call it 
 after every real atom and after your formatDecl (so before your linesOf 
 parser).
 
Excellent solution. I appreciate your help. That would have taken me
quite a bit of head-banging to find.
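For the archive, a sketch of the suggested restructuring (names hypothetical): consume trailing blanks after each token rather than calling spaces before it, so that '\n' is left for the eol parser to see.

```haskell
import Text.ParserCombinators.Parsec

-- Consume only blanks after a token, never the newline that
-- terminates a record.
lexeme :: Parser a -> Parser a
lexeme p = do x <- p
              skipMany (char ' ')
              return x

atomName :: Parser String
atomName = lexeme (many1 (alphaNum <|> oneOf "'+-")) <?> "atom name"

eol :: Parser ()
eol = char '\n' >> return ()
```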

Cheers,

- Ben

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Parsec line number off-by-one

2011-09-21 Thread Ben Gamari
On Wed, 21 Sep 2011 09:37:40 +0300, Roman Cheplyaka r...@ro-che.info wrote:
 Hi Ben,
 
 This is indeed a bug in parsec.
 
Ahh good. I'm glad I'm not crazy. Given that it seems the lookahead is
actually unnecessary, looks like I can skip the patch too. Thanks for
your reply!

Cheers,

- Ben

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Parsec line number off-by-one

2011-09-20 Thread Ben Gamari
Recently I've been playing around with Parsec for a simple parsing
project. While I was able to quickly construct my grammar (simplified
version attached), getting it working has been a bit tricky. In
particular, I am now stuck trying to figure out why Parsec is
mis-reporting line numbers. Parsec seems convinced that line 12 of my
input (also attached) has a % character,

  $ runghc Test.hs
  Left (unknown) (line 12, column 1):
  unexpected %
  expecting space or atom name

while my file clearly disagrees,

  10  %FLAG ATOM_NAME   
  
  11  %FORMAT(20a4) 
  
  12  C1  H1  C2  H2  C3  H3  C4  H4  C5  C6  C7  C8  N1  C9  H9  C10 H10 C11 
H11 C12 
  13  H12 C13 H13 C14 C15 N2  C16 C17 C29 H18 C19 H19 C20 H20 C21 H21 C22 
H221H222H223
  ...
  18  %FLAG CHARGE
  19  %FORMAT(5E16.8)   
  

The task here is to identify the block of data lines (lines 12-17),
ending at the beginning of the next block (starting with %). It seems
likely that my problem stems from the fact that I use try to
accomplish this but this is as far as I can reason.

Any ideas what might cause this sort of off-by-one? Does anyone see a
better (i.e. working) way to formulate my grammar? Any and all help
would be greatly appreciated. Thanks.

Cheers,

- Ben


module Main(main) where

import Debug.Trace
import Data.Maybe
import Text.ParserCombinators.Parsec
import Text.Parsec.Prim (ParsecT)
import qualified Text.ParserCombinators.Parsec.Token as P
import Text.ParserCombinators.Parsec.Language (emptyDef)

data PrmTopBlock = AtomNames [String]
                 deriving (Show, Eq)

countBetween m n p = do xs <- count m p
                        ys <- count (n - m) $ option Nothing $ do
                                y <- p
                                return (Just y)
                        return (xs ++ catMaybes ys)

restLine = do a <- many (noneOf "\n")
              eol
              return a

eol = do skipMany $ char ' '
         char '\n'

natural = P.integer $ P.makeTokenParser emptyDef

float = do sign <- option 1 (do s <- oneOf "+-"
                                return $ if s == '-' then -1 else 1)
           x <- P.float $ P.makeTokenParser emptyDef
           return $ sign * x

flagDecl x = do string $ "%FLAG " ++ x
                eol

formatDecl = do string "%FORMAT("
                count <- many1 digit
                format <- letter
                length <- many1 digit
                char ')'
                eol
                return (count, format, length)

-- |Multiple lines of lists of a given item
linesOf item = do ls <- many1 $ try (do lookAhead (noneOf "%")
                                        l <- many1 item
                                        eol
                                        return $ trace (show l) l)
                  return $ concat ls

atomNameBlock = do flagDecl "ATOM_NAME"
                   formatDecl
                   atomNames <- linesOf atomName
                   return $ AtomNames atomNames
    where
    atomName = do spaces
                  name <- countBetween 1 4 (alphaNum <|> oneOf "'+-") <?> "atom name"
                  return name

ignoredBlock = do string "%FLAG" <?> "ignored block flag"
                  restLine
                  formatDecl
                  skipMany (noneOf "%" >> restLine)

hello = do ignoredBlock
           ignoredBlock
           atomNameBlock

parsePrmTopFile input = parse hello "(unknown)" input

test = do a <- readFile "test.prmtop"
          print $ parsePrmTopFile a

main = test


test.prmtop
Description: Binary data
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Bug in GC's ordering of ForeignPtr finalization?

2011-08-28 Thread Ben Gamari
On Tue, 16 Aug 2011 12:32:13 -0400, Ben Gamari bgamari.f...@gmail.com wrote:
 It seems that the notmuch-haskell bindings (version 0.2.2 built against
 notmuch from git master; passes notmuch-test) aren't dealing with memory
 management properly. In particular, the attached test code[1] causes
 talloc to abort.  Unfortunately, while the issue is consistently
 reproducible, it only occurs with some queries (see source[1]). I have
 been unable to establish the exact criterion for failure.
 
 It seems that the crash is caused by an invalid access to a freed Query
 object while freeing a Messages object (see Valgrind trace[3]). I've
 taken a brief look at the bindings themselves but, being only minimally
 familiar with the FFI, there's nothing obviously wrong (the finalizers
 passed to newForeignPtr look sane). I was under the impression that
 talloc was reference counted, so the Query object shouldn't have been
 freed unless if there was still a Messages object holding a
 reference. Any idea what might have gone wrong here?  Thanks!
 
After looking into this issue in a bit more depth, I'm even more
confused. In fact, I would not be surprised if I have stumbled into a
bug in the GC. It seems that the notmuch-haskell bindings follow the
example of the python bindings in that child objects keep references to
their parents to prevent the garbage collector from releasing the
parent, which would in turn cause talloc to free the child objects,
resulting in odd behavior when the child objects were next accessed. For
instance, the Query and Messages objects are defined as follows,

type MessagesPtr = ForeignPtr S__notmuch_messages
type MessagePtr = ForeignPtr S__notmuch_message
newtype Query = Query (ForeignPtr S__notmuch_query)
data MessagesRef = QueryMessages { qmpp :: Query, msp :: MessagesPtr }
 | ThreadMessages { tmpp :: Thread, msp :: MessagesPtr }
 | MessageMessages { mmspp :: Message, msp :: MessagesPtr }
data Message = MessagesMessage { msmpp :: MessagesRef, mp :: MessagePtr }
 | Message { mp :: MessagePtr }
type Messages = [Message]

As seen in the Valgrind dump given in my previous message, it seems that
the Query object is being freed before the Messages object. Since the
Messages object is a child of the Query object, this fails.

In my case, I'm calling queryMessages which begins by issuing a given
notmuch Query, resulting in a MessagesPtr. This is then packaged into a
QueryMessages object which is then passed off to
unpackMessages. unpackMessages iterates over this collection, creating
MessagesMessage objects which themselves refer to the QueryMessages
object. Finally, these MessagesMessage objects are packed into a list,
resulting in a Messages object. Thus we have the following chain of
references,

MessagesMessage
  |   
  |  msmpp
  \/
QueryMessages
  |
  |  qmpp
  \/
Query

As we can see, each MessagesMessage object in the Messages list
resulting from queryMessages holds a reference to the Query object from
which it originated. For this reason, I fail to see how it is possible
that the RTS would attempt to free the Query before freeing the
MessagesPtr. Did I miss something in my analysis? Are there tools for
debugging issues such as this? Perhaps this is a bug in the GC?
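For anyone hitting the same thing: one commonly suggested workaround (a sketch only; the names below are stand-ins for the real notmuch bindings) is to give the child a Haskell-side finalizer via Foreign.Concurrent that frees the child and then touches the parent, so the parent stays reachable until the child has been freed. See the caveats on finalizer ordering in the touchForeignPtr documentation before relying on this.

```haskell
import Foreign.ForeignPtr (ForeignPtr, touchForeignPtr)
import qualified Foreign.Concurrent as FC
import Foreign.Ptr (Ptr)

-- 'destroy' stands in for the real C destructor
-- (e.g. notmuch_messages_destroy); 'parent' would be the Query's
-- ForeignPtr. Touching the parent from the child's finalizer keeps
-- the parent alive until this finalizer has run.
wrapChild :: ForeignPtr parent
          -> Ptr child
          -> (Ptr child -> IO ())
          -> IO (ForeignPtr child)
wrapChild parent raw destroy =
  FC.newForeignPtr raw $ do
    destroy raw
    touchForeignPtr parent
```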

Any help at all would be greatly appreciated.

Cheers,

- Ben

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Bug in GC's ordering of ForeignPtr finalization?

2011-08-28 Thread Ben Gamari
On Sun, 28 Aug 2011 22:26:05 -0500, Antoine Latter aslat...@gmail.com wrote:
 One problem you might be running in to is that the optimization passes
 can notice that a function isn't using all of its arguments, and then
 it won't pass them. These even applies if the arguments are bound
 together in a record type.
 
In this case I wouldn't be able to reproduce the problem with
optimization disabled, no? Unfortunately, this is not the case; the
problem persists even with -O0.

- Ben

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Memory management issue in notmuch-haskell bindings

2011-08-16 Thread Ben Gamari
It seems that the notmuch-haskell bindings (version 0.2.2 built against
notmuch from git master; passes notmuch-test) aren't dealing with memory
management properly. In particular, the attached test code[1] causes
talloc to abort.  Unfortunately, while the issue is consistently
reproducible, it only occurs with some queries (see source[1]). I have
been unable to establish the exact criterion for failure.

It seems that the crash is caused by an invalid access to a freed Query
object while freeing a Messages object (see Valgrind trace[3]). I've
taken a brief look at the bindings themselves but, being only minimally
familiar with the FFI, there's nothing obviously wrong (the finalizers
passed to newForeignPtr look sane). I was under the impression that
talloc was reference counted, so the Query object shouldn't have been
freed unless if there was still a Messages object holding a
reference. Any idea what might have gone wrong here?  Thanks!

Cheers,

- Ben



[1] Test case,

import Data.List
import Control.Monad
import System.Environment
import Foreign.Notmuch

dbpath = "/home/ben/.mail"

getAddresses :: Database -> String -> IO [String]
getAddresses db q = do
    query <- queryCreate db q
    msgs <- queryMessages query
    addrs <- mapM (flip messageGetHeader $ "From") msgs
    return addrs

main = do
    db <- databaseOpen dbpath DatabaseModeReadOnly
    --addrs2 <- getAddresses db "tag:haskell" -- This succeeds
    addrs3 <- getAddresses db "to:dietz" -- This fails

    --print addrs2
    --print addrs3

    databaseClose db



[2] Crashed session and backtrace,

[1217 ben@ben-laptop ~] $ ghc test.hs -auto-all -rtsopts -prof && gdb ./test 
GNU gdb (Ubuntu/Linaro 7.2-1ubuntu11) 7.2
Copyright (C) 2010 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type show copying
and show warranty for details.
This GDB was configured as x86_64-linux-gnu.
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /home/ben/test...(no debugging symbols found)...done.
(gdb) run
Starting program: /home/ben/test 
[Thread debugging using libthread_db enabled]

Program received signal SIGABRT, Aborted.
0x75979d05 in raise (sig=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:64
64  ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
in ../nptl/sysdeps/unix/sysv/linux/raise.c
(gdb) bt
#0  0x75979d05 in raise (sig=6) at 
../nptl/sysdeps/unix/sysv/linux/raise.c:64
#1  0x7597dab6 in abort () at abort.c:92
#2  0x76de038c in talloc_abort (reason=0x76de56e8 Bad talloc magic 
value - access after free) at ../talloc.c:210
#3  0x76de0271 in talloc_abort_access_after_free (ptr=0x769190, 
location=0x77bd4e04 lib/messages.c:142) at ../talloc.c:229
#4  talloc_chunk_from_ptr (ptr=0x769190, location=0x77bd4e04 
lib/messages.c:142) at ../talloc.c:250
#5  _talloc_free (ptr=0x769190, location=0x77bd4e04 lib/messages.c:142) 
at ../talloc.c:1164
#6  0x77bc7e65 in notmuch_messages_destroy (messages=0x769190) at 
lib/messages.c:142
#7  0x004de1c9 in scheduleFinalizers ()
#8  0x004e013d in GarbageCollect ()
#9  0x004d9e40 in scheduleDoGC.clone.18 ()
#10 0x004db0e0 in exitScheduler ()
#11 0x004d9066 in hs_exit_ ()
#12 0x004d940a in shutdownHaskellAndExit ()
#13 0x004d8a91 in real_main ()
#14 0x004d8ade in hs_main ()
#15 0x75964eff in __libc_start_main (main=0x408ed0 main, argc=1, 
ubp_av=0x7fffe4f8, init=value optimized out, fini=value optimized out, 
rtld_fini=value optimized out, stack_end=0x7fffe4e8) at 
libc-start.c:226
#16 0x00407791 in _start ()
(gdb) 


[3] Valgrind output,

==25241== Memcheck, a memory error detector
==25241== Copyright (C) 2002-2010, and GNU GPL'd, by Julian Seward et al.
==25241== Using Valgrind-3.6.1 and LibVEX; rerun with -h for copyright info
==25241== Command: ./test
==25241== 
==25241== Conditional jump or move depends on uninitialised value(s)
==25241==at 0x52BB510: inflateReset2 (in 
/lib/x86_64-linux-gnu/libz.so.1.2.3.4)
==25241==by 0x52BB605: inflateInit2_ (in 
/lib/x86_64-linux-gnu/libz.so.1.2.3.4)
==25241==by 0x5F211BE: ChertTable::lazy_alloc_inflate_zstream() const 
(chert_table.cc:1672)
==25241==by 0x5F23B06: ChertTable::read_tag(Cursor*, std::string*, bool) 
const (chert_table.cc:1264)
==25241==by 0x5F260F9: ChertTable::get_exact_entry(std::string const, 
std::string) const (chert_table.cc:1210)
==25241==by 0x5F26DE2: 
ChertTermList::ChertTermList(Xapian::Internal::RefCntPtrChertDatabase const, 
unsigned int) (chert_termlist.cc:44)
==25241==by 0x5EFF2E5: ChertDatabase::open_term_list(unsigned int) const 
(chert_database.cc:891)
==25241==by 0x5E7E7FB: 

Re: [Haskell-cafe] Fwd: shootout

2011-08-03 Thread Ben Gamari
On Sun, 31 Jul 2011 02:27:07 +0200, Thorsten Hater t...@tp1.rub.de wrote:
 Good Evening,
 
 can anybody confirm that this implementation is somewhat faster
 than the current benchmark (at expense of memory consumption)?
 
 Cheers, Thorsten
 
"Somewhat faster" is an understatement, I would say:

$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.0.3
$ ghc -O2 -threaded -rtsopts fasta-old.hs -XBangPatterns
[1 of 1] Compiling Main ( fasta-old.hs, fasta-old.o )
Linking fasta-old ...
$ ghc -O2 -threaded -rtsopts fasta-new.hs
[1 of 1] Compiling Main ( fasta-new.hs, fasta-new.o )
Linking fasta-new ...
$ time ./fasta-old +RTS -N2 -RTS 2500 | old

real0m21.173s
user0m18.380s
sys 0m0.910s
$ time ./fasta-new +RTS -N2 -RTS 2500 | new

real0m4.809s
user0m2.190s
sys 0m0.730s
$ diff -q old new
$ 
$ time ./fasta-old +RTS -N1 -RTS 2500 | old

real0m19.069s
user0m16.670s
sys 0m0.630s
$ time ./fasta-new +RTS -N1 -RTS 2500 | new

real0m3.797s
user0m1.500s
sys 0m0.600s
$ diff -q old new
$ 

This is on a dual-core Core 2 running at 2.1GHz. I'm honestly not sure
why performance doesn't improve with two threads, but I think I've made
the point.

- Ben

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe