Re: [Haskell-cafe] How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Erik Hesselink
On Wed, Feb 10, 2010 at 16:59, Jason Dusek jason.du...@gmail.com wrote:
  I wonder how many people actually write Haskell,
  principally or exclusively, at work?

We (typLAB) use Haskell. There's four of us, but only two actually
program Haskell, and not exclusively. We also use Javascript in the
browser (though we use functional programming techniques there as
well).

Erik
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: Dungeons of Wor - a largish FRP example and a fun game, all in one!

2010-02-11 Thread Patai Gergely
Hello all,

I just uploaded the first public version of Dungeons of Wor [1], a
homage to the renowned three-decade-old arcade game, Wizard of Wor.
While it makes a fine time killer if you have a few minutes to spare, it
might be of special interest to the lost souls who are trying to figure
out FRP. The game was programmed using the Simple version of the
experimental branch of Elerea [2], which provides first-class discrete
streams to describe time-varying quantities, and the main game logic is
described as a composition of streams instead of a world state
transformer. Developing in this manner was an interesting experience,
and I'll write about it in more detail over the weekend.

All the best,

Gergely

[1] http://hackage.haskell.org/package/dow
[2]
http://hackage.haskell.org/packages/archive/elerea/1.2.3/doc/html/FRP-Elerea-Experimental-Simple.html

-- 
http://www.fastmail.fm - The professional email service

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Seen on reddit: or, foldl and foldr considered slightly harmful

2010-02-11 Thread Johann Höchtl
In a presentation of Guy Steele for ICFP 2009 in Edinburgh:
http://www.vimeo.com/6624203
he considers foldl and foldr harmful as they hinder parallelism
because of "Process first element, then the rest". Instead he proposes
a divide and merge approach, especially in the light of going parallel.

The slides at
http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.sun.com%2Fprojects%2Fplrg%2FPublications%2FICFPAugust2009Steele.pdf
[Beware: Google docs]
are somewhat geared towards Fortress, but I wonder what Haskellers
have to say about his position.

Greetings,

   Johann
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Michael Oswald
On 02/10/2010 04:59 PM, Jason Dusek wrote:
   I wonder how many people actually write Haskell,
   principally or exclusively, at work?

Well, my main language at work at the moment is C++; we also use Java and a
lot of Tcl and Python.

I use Haskell for my own programs and test utilities / converters. The
biggest achievement at work was an Installer program, which was quite
complicated and had to be safe, and of course we had time pressure, so I
quickly coded it in Haskell. It is now used in the installation
procedure of a part of a big mission control system for satellites.


lg,
Michael

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] HDBC convert [SqlValue] without muchos boilerplate

2010-02-11 Thread Iain Barnett
Hi,

I'm trying to get to grips with HDBC and have the following problem. When I run 
a query that returns a result set, each row comes back as a [SqlValue]. 
Naively, I thought the following function would convert a [SqlValue] into a 
string, but instead I get the error below. 

convrow2 :: [SqlValue] -> String
convrow2 (x:xs) = foldl (\i j -> i ++ " | " ++ show j) (show (fromSql x)) xs


Prelude> :l TasksSimple.lhs
[1 of 1] Compiling Main ( TasksSimple.lhs, interpreted )

TasksSimple.lhs:126:65:
    No instance for (convertible-1.0.5:Data.Convertible.Base.Convertible
                       SqlValue a)
      arising from a use of `fromSql' at TasksSimple.lhs:126:65-73
    Possible fix:
      add an instance declaration for
      (convertible-1.0.5:Data.Convertible.Base.Convertible SqlValue a)
    In the first argument of `show', namely `(fromSql x)'
    In the second argument of `foldl', namely `(show (fromSql x))'
    In the expression:
        foldl (\ i j -> i ++ " | " ++ show j) (show (fromSql x)) xs
Failed, modules loaded: none.


I tried looking at how to add an instance declaration for convertible, but was 
stumped.


This code, however, works in GHCi. Would anyone know how to convert from 
[SqlValue] in a straightforward way without having to specify every field by 
hand? I don't fancy doing this for each SQL statement I need to run.

convrow1 :: [SqlValue] -> String
convrow1 [tasksid,title,added] =
    show ((fromSql tasksid) :: Integer)
    ++ " | " ++
    fromSql title
    ++ " | " ++
    show ((fromSql added) :: LocalTime)


Any help is much appreciated, especially as I haven't looked at any Haskell in 
a while and wasn't any good with it before!

Regards,
Iain

This is my set up:
GHC is 6.10.4
HDBC is HDBC-2.1.1, HDBC-2.2.2, HDBC-postgresql-2.1.0.0,HDBC-postgresql-2.2.0.0
Convertible is convertible-1.0.5, convertible-1.0.8
OSX 10.6
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Seen on reddit: or, foldl and foldr considered slightly harmful

2010-02-11 Thread Ketil Malde
Johann Höchtl johann.hoec...@gmail.com writes:

 In a presentation of Guy Steele for ICFP 2009 in Edinburgh:
 http://www.vimeo.com/6624203
 he considers foldl and foldr harmful as they hinder parallelism
 because of "Process first element, then the rest". Instead he proposes
 a divide and merge approach, especially in the light of going parallel.

In Haskell foldl/foldr apply to linked lists (or lazy streams, if you
prefer) which are already inherently sequential, and get a rather harsh
treatment.  I notice he points to finger trees, which I thought was
implemented in Data.Sequence.

 are somewhat  geared towards Fortress, but I wonder what Haskellers
 have to say about his position.

Can we (easily) parallelize operations on Data.Sequence?  Often, the
devil is in the details, and there's a lot of ground to cover between
'trees are easier to parallelize' and an efficient and effective high
level interface.  (E.g. non-strict semantics allowing speculative
evaluation - you still need to insert manual `par`s, right?)

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Seen on reddit: or, foldl and foldr considered slightly harmful

2010-02-11 Thread Thomas Girod
Isn't this the kind of thing Data Parallel Haskell is achieving? I'm in
no way an expert in the field, but from what I've read on the subject it
looks like this:

I have a list of N elements and I want to map the function F over it.
Technically, I could spawn N processes and build the result from that,
but it would be highly inefficient. So the really hard part is to guess
how I should split my data to get the best performance.

Well, I guess it's pretty easy for a flat structure if you have access
to its length, but for a recursive one it is complicated, as you don't
know if a branch of the tree will lead to a leaf or a huge subtree ...
the evil detail!

Tom


On Thu, Feb 11, 2010 at 11:00:51AM +0100, Ketil Malde wrote:
 Johann Höchtl johann.hoec...@gmail.com writes:
 
  In a presentation of Guy Steele for ICFP 2009 in Edinburgh:
  http://www.vimeo.com/6624203
  he considers foldl and foldr harmful as they hinder parallelism
  because of "Process first element, then the rest". Instead he proposes
  a divide and merge approach, especially in the light of going parallel.
 
 In Haskell foldl/foldr apply to linked lists (or lazy streams, if you
 prefer) which are already inherently sequential, and gets a rather harsh
  treatment.  I notice he points to finger trees, which I thought was
 implemented in Data.Sequence.
 
  are somewhat  geared towards Fortress, but I wonder what Haskellers
  have to say about his position.
 
 Can we (easily) parallelize operations on Data.Sequence?  Often, the
 devil is in the details, and there's a lot of ground to cover between
  'trees are easier to parallelize' and an efficient and effective high
 level interface.  (E.g. non-strict semantics allowing speculative
  evaluation - you still need to insert manual `par`s, right?)
 
 -k
 -- 
 If I haven't seen further, it is by standing in the footprints of giants
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] HDBC convert [SqlValue] without muchos boilerplate

2010-02-11 Thread Miguel Mitrofanov

The problem is, fromSql x doesn't know what type it should return. It's sure that it
has to be of class Convertible SqlValue, but nothing more. It could be String, or Int32,
or something else.

What if you just omit the show function? fromSql seems to be able to convert
almost anything to a String.
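
Something along these lines might work (an untested sketch on my part):

import Database.HDBC (SqlValue, fromSql)

convrow3 :: [SqlValue] -> String
convrow3 []     = ""
convrow3 (x:xs) = foldl (\i j -> i ++ " | " ++ fromSql j) (fromSql x) xs

Here both uses of fromSql are pinned to the String instance by the type
signature, so the ambiguity goes away.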

Iain Barnett wrote:

Hi,

I'm trying to get to grips with HDBC and have the following problem. When I run a query that returns a result set, each row comes back as a [SqlValue]. Naively, I thought the following function would convert a [SqlValue] into a string, but instead I get the error below. 


convrow2 :: [SqlValue] -> String
convrow2 (x:xs) = foldl (\i j -> i ++ " | " ++ show j) (show (fromSql x)) xs


Prelude> :l TasksSimple.lhs
[1 of 1] Compiling Main ( TasksSimple.lhs, interpreted )


TasksSimple.lhs:126:65:
    No instance for (convertible-1.0.5:Data.Convertible.Base.Convertible
                       SqlValue a)
      arising from a use of `fromSql' at TasksSimple.lhs:126:65-73
    Possible fix:
      add an instance declaration for
      (convertible-1.0.5:Data.Convertible.Base.Convertible SqlValue a)
    In the first argument of `show', namely `(fromSql x)'
    In the second argument of `foldl', namely `(show (fromSql x))'
    In the expression:
        foldl (\ i j -> i ++ " | " ++ show j) (show (fromSql x)) xs
Failed, modules loaded: none.


I tried looking at how to add an instance declaration for convertible, but was 
stumped.


This code, however, works in GHCi. Would anyone know how to convert from [SqlValue] in a 
straightforward way without having to specify every field by hand ? I don't 
fancy doing this for each sql statement I need to run.

convrow1 :: [SqlValue] -> String
convrow1 [tasksid,title,added] =
    show ((fromSql tasksid) :: Integer)
    ++ " | " ++
    fromSql title
    ++ " | " ++
    show ((fromSql added) :: LocalTime)


Any help is much appreciated, especially as I haven't looked at any Haskell in 
a while and wasn't any good with it before!

Regards,
Iain

This is my set up:
GHC is 6.10.4
HDBC is HDBC-2.1.1, HDBC-2.2.2, HDBC-postgresql-2.1.0.0,HDBC-postgresql-2.2.0.0
Convertible is convertible-1.0.5, convertible-1.0.8
OSX 10.6
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Haskell-Cafe Digest, Vol 78, Issue 14

2010-02-11 Thread Maciej Piechotka
On Tue, 2010-02-09 at 16:41 +, John Lato wrote:
 
 See http://inmachina.net/~jwlato/haskell/ParsecIteratee.hs for a valid
 Stream instance using iteratee.  Also Gregory Collins recently posted
 an iteratee wrapper for Attoparsec to haskell-cafe.  To my knowledge
 these are not yet in any packages, but hackage is vast. 

Hmm. Am I correct that his implementation caches everything?
I tried to rewrite the implementation using... well, an imperative linked
list. For a trivial benchmark it shows a large improvement (although that may
be due to an error in the test, such as using ByteString) and, I believe,
it allows memory to be freed before the end.

Results of test on Core 2 Duo 2.8 GHz:
10:     0.000455s   0.000181s
100:    0.000669s   0.001104s
1000:   0.005209s   0.023704s
10000:  0.053292s   1.423443s
100000: 0.508093s   132.208597s

After that I aborted the run, as it was taking too long.

Is my implementation correct? (When I try to write a less trivial benchmark
I will probably find out, but I hope for comments on the idea.)

Regards
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE MultiParamTypeClasses #-}
import Control.Applicative
import Control.Concurrent.MVar
import Data.Maybe
import Control.Monad
import Control.Monad.ST
import Data.Iteratee
import Data.Iteratee.Base.StreamChunk (StreamChunk)
import Data.Iteratee.WrappedByteString
import Data.IORef
import qualified Data.ListLike as LL
import Data.STRef
import Data.Time
import Data.Word
import System.Mem.Weak
import Text.Parsec
import Text.Parsec.Pos
import ParsecIteratee

class Monad m => Reference r m where
  newRef :: a -> m (r a)
  readRef :: r a -> m a
  writeRef :: r a -> a -> m ()
  modifyRef :: r a -> (a -> m (a, b)) -> m b
  modifyRef r f = readRef r >>= f >>= \(a, b) -> writeRef r a >> return b

instance Reference IORef IO where
  newRef = newIORef
  readRef = readIORef
  writeRef = writeIORef

instance Reference (STRef s) (ST s) where
  newRef = newSTRef
  readRef = readSTRef
  writeRef = writeSTRef

instance Reference MVar IO where
  newRef = newMVar
  readRef = readMVar
  writeRef = putMVar

data (Monad m, Reference r m, StreamChunk c el) => NextCursor r m c el =
  NextCursor (Cursor r m c el) | None | Uneval

data (Monad m, Reference r m, StreamChunk c el) => Cursor r m c el =
  Cursor (r (NextCursor r m c el)) (c el)

mkCursor :: (Monad m, Reference r m, StreamChunk c el) => m (Cursor r m c el)
mkCursor = newRef Uneval >>= \r -> (return $! Cursor r LL.empty)

instance (Monad m, Reference r m, StreamChunk c el) =>
         Stream (Cursor r m c el) (IterateeG c el m) el where
  uncons = unconsStream

unconsStream :: (Monad m, Reference r m, StreamChunk c el)
             => Cursor r m c el
             -> IterateeG c el m (Maybe (el, Cursor r m c el))
unconsStream p@(Cursor r c)
  | LL.null c = IterateeG $ \st -> join $ modifyRef r $ unconsCursor st p
  | otherwise = return $! justUnconsCursor p

unconsCursor :: forall r m c el. (Monad m, Reference r m, StreamChunk c el)
             => StreamG c el
             -> Cursor r m c el
             -> NextCursor r m c el
             -> m (NextCursor r m c el,
                   m (IterGV c el m (Maybe (el, Cursor r m c el))))
unconsCursor st _ rv@(NextCursor p@(Cursor r c))
  | LL.null c = return $! (rv, join $ modifyRef r $ unconsCursor st p)
  | otherwise = return $! (rv, return $! Done (justUnconsCursor p) st)
unconsCursor st _ rv@(None)
  = return $! (rv, return $! Done Nothing st)
unconsCursor st@(Chunk c) p rv@(Uneval)
  | LL.null c = return $! (rv, return $! Cont (unconsStream p) Nothing)
  | otherwise = do r <- newRef Uneval :: m (r (NextCursor r m c el))
                   let p' = Cursor r c
                       ra = Done (justUnconsCursor p') (Chunk LL.empty)
                   return $! (NextCursor p', return $! ra)
unconsCursor st@(EOF Nothing) _ rv@(Uneval)
  = return $! (None, return $! Done Nothing st)
unconsCursor st@(EOF (Just e)) _ rv@(Uneval)
  = return $! (rv, return $! Cont (throwErr e) (Just e))

justUnconsCursor :: (Monad m, Reference r m, StreamChunk c el) =>
                    Cursor r m c el -> Maybe (el, Cursor r m c el)
justUnconsCursor (Cursor r c) = Just $! (LL.head c, Cursor r $ LL.tail c)

benchmarkParser :: (Stream s m Char) => Int -> ParsecT s () m ()
benchmarkParser i | i <  5 = try $ sequence_ $ replicate i $ char '\0'
                  | i >= 5 = (try $ sequence_ $ replicate 5 $ char '\0') >>
                             benchmarkParser (i - 5)

mkBenchmark :: Int -> IO (NominalDiffTime, NominalDiffTime)
mkBenchmark i = do start <- getCurrentTime
                   c <- mkCursor :: IO (Cursor IORef IO WrappedByteString Char)
                   let bp :: ParsecT (Cursor IORef IO WrappedByteString Char)
                                     ()
                                     (IterateeG WrappedByteString Char IO)
 

Re: [Haskell-cafe] Using Cabal during development

2010-02-11 Thread Limestraël

Eventually, I think using cabal during development may be convenient. The
only drawback is that you have to specify each dependency and -- above all
-- every module each time you add one.
Nevertheless, I'm not convinced regarding the use of Makefiles with Cabal. I
happen to think it's a bit oversized.
A shell script is enough.
By the way, I've found another way to develop one (or several) libraries and
an executable simultaneously.
It would be to use a local GHC package database.

In my project directory, I do:
ghc-pkg init pkg.conf.d

It creates a directory pkg.conf.d which will contain my local database.

Then all the libs must be configured with:
cabal configure --package-db pkg.conf.d
(or 'runhaskell Setup.hs configure --package-db pkg.conf.d' if you don't use
cabal-install)
Then build normally ('cabal build')
Then, the little annoyance is that you have to register your newly-built
library manually with:
cabal register --inplace
(Does anyone know how to tell cabal to register automatically to the local pkg
database?)

Then, to compile your executable with ghc (because Cabal is definitely not
convenient when you have a lib and an executable in the same package):
ghc --make --package-conf pkg.conf.d main.hs

Again, should you have better/simpler ways to achieve this, I would be glad
to know them.


Simon Michael wrote:
 
 Another great thread. I'm another who uses both make and cabal. I try to
 automate a lot of things and find a makefile 
 easier for quick scripting. Perhaps at some point I'll get by with just
 cabal. Here's an example:
 
 http://joyful.com/repos/hledger/Makefile
 
 An unusual feature, I think, is the use of the little-known sp tool for
 auto-recompiling (see ci rule). Typically I 
 leave make ci running in an emacs shell window, where I can watch the
 errors as I edit and save source. I don't have 
 clickable errors currently, I get by with linum-mode. When I need to
 explore I'll run ghci in another shell window. 
 After reading this thread, I'm going to try using C-c C-l more.
 
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
 
 

-- 
View this message in context: 
http://old.nabble.com/Using-Cabal-during-development-tp27515446p27544307.html
Sent from the Haskell - Haskell-Cafe mailing list archive at Nabble.com.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Using Cabal during development

2010-02-11 Thread Maurício CA

 Eventually, I think using cabal during development may be
 convenient. The only drawback is that you have to specify each
 dependency and -- above all -- every module each time you add
 one.

When writing bindings-posix, bindings-glib etc., which have lots
of modules, I used a shell script to take all modules under ./src
into .cabal. What it did was: 1) for each subdirectory, create a .hs
file that re-imports all modules under that subdirectory; 2) list
all .hs (and, in my case, also .hsc) under ./src and insert them
into .cabal.

I've found that scripts of this kind have been very useful. They
require you to follow some rules (like, say, all modules with names
mapping to directories are always a re-import of submodules).

 Then, to compile you executable with ghc (because Cabal is
 definitely not convient when you have a lib and an executable in
 the same package): ghc --make --package-conf pkg.conf.d main.hs

 Again, should you have better/simpler ways to achieve this, I
 would be glad to know them.

I usually find it useful, at first, to build only the executable and
leave out the library. When modules get stable enough, I separate
the two. Other scripts have also been useful: one to check for
uncommitted changes in all packages I'm working on, another to sync
all my local packages with their repos. You could have a single
'b' script you could run like:

b l # build and install your library in local database
b t # build and run your test package executable
b m # update module list in .cabal file for all your packages
etc.

Best,

Maurício

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Seen on reddit: or, foldl and foldr considered slightly harmful

2010-02-11 Thread Ross Paterson
On Thu, Feb 11, 2010 at 11:00:51AM +0100, Ketil Malde wrote:
 Johann Höchtl johann.hoec...@gmail.com writes:
  In a presentation of Guy Steele for ICFP 2009 in Edinburgh:
  http://www.vimeo.com/6624203
  he considers foldl and foldr harmful as they hinder parallelism
  because of "Process first element, then the rest". Instead he proposes
  a divide and merge approach, especially in the light of going parallel.
 
 In Haskell foldl/foldr apply to linked lists (or lazy streams, if you
 prefer) which are already inherently sequential, and gets a rather harsh
 treatment.  I notice he points to finger trees, which I thought was
 implemented in Data.Sequence.

Direct URL for the slides:
http://research.sun.com/projects/plrg/Publications/ICFPAugust2009Steele.pdf

As he says, associativity is the key to parallelism -- an old observation,
but still underappreciated.  Even without parallelism, associativity also
gives us greater freedom in structuring our solutions.  The moral is that
our datatypes need associative binary operations more than asymmetric ones.
We use lists too much (because they're so convenient) and apart from the
loss of parallelism it constrains our thinking to the sequential style
criticised by Backus back in 1978.
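
A toy illustration of the point (not code from the talk, and assuming the
parallel package's Control.Parallel): once the operation is associative, the
reduction can be split anywhere and the two halves evaluated in parallel,
giving the same result as a sequential fold.

import Control.Parallel (par, pseq)
import Data.Monoid (Monoid (..))

reduceTree :: Monoid a => [a] -> a
reduceTree []  = mempty
reduceTree [x] = x
reduceTree xs  = l `par` (r `pseq` (l `mappend` r))
  where
    (ls, rs) = splitAt (length xs `div` 2) xs  -- the split point is arbitrary
    l = reduceTree ls
    r = reduceTree rs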

Regarding finger trees, he's just referring to the idea of caching the
results of monoidal folds in nodes of a tree.  That's crucial to the
applications of finger trees, but it can be applied to any kind of tree.
As he mentions, it's related to the Ladner-Fischer parallel prefix
algorithm, which has an upward pass accumulating sums for each subtree
followed by a downward accumulation passing each sum into the subtree
to the right.   But it's not just for parallelism: when you have these
cached values in a balanced tree, you can compute the sum of any prefix
in O(log n) steps.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] If monads are single/linearly threaded, doesn't that reduce parallelism?

2010-02-11 Thread Matthias Görgens
Perhaps if you search for "Abelian monad" or so, you will find
interesting things in the category theory literature.  Some of them
may be transplantable to Haskell --- but you probably don't want a
completely commutative structure.  Arrows seem to express the
dependencies between operations in a more fine-grained way than the
sequencing that Monads require.  (I've been meaning to look into arrows
for ages...)

Matthias.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Using Cabal during development

2010-02-11 Thread MightyByte
On Thu, Feb 11, 2010 at 5:28 AM, Limestraël limestr...@gmail.com wrote:
 Eventually, I think using cabal during development may be convenient. The
 only drawback is that you have to specify each dependency...

I actually think this is a benefit, not a drawback.  In one of my
projects where I used makefiles, I was depending on a variety of
hackage projects.  I was using development code from one of these that
had not yet made it onto hackage.  There was a period of time where I
didn't do much development, and when I came back, I updated to the
most recent development code and discovered that it broke my build.
Since I didn't have all my dependencies specified and didn't know
which devel version I had been using, I was unable to build the
project until I fixed all the places where the update to the other
package broke my code.  If you are on a tight time schedule, this
could be very problematic, whereas if you had specified the
dependencies, cabal would easily be able to get the right versions for
you.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Praki Prakash
I am working on an analytics server with a web front end. Being a
personal endeavor at this time, I can choose any language that I
fancy. I love Haskell and have achieved a modicum of proficiency with
many years of following along. I spent a few weeks of serious Haskell
prototyping and came to the realization that Haskell has a very steep
learning curve to become truly proficient in it. The basics are easy,
the various typeclasses can be understood with some study. But, there
are thousands of packages on Hackage and not much documentation on
most of them. Another issue for me is the lack of a cohesive
infrastructure for working with web services.

Now my work has shifted to Clojure. I like it so far but I miss the
elegance of Haskell. Whether Haskell becomes an easy choice for
commercial work or remains a boutique language depends on how easy it
is to build today's applications.

But, I still love Haskell :)
Praki


On Thu, Feb 11, 2010 at 1:39 AM, Michael Oswald muell...@gmx.net wrote:
 On 02/10/2010 04:59 PM, Jason Dusek wrote:
   I wonder how many people actually write Haskell,
   principally or exclusively, at work?

 Well, my main language at work in the moment is C++, we also use Java, a
 lot of Tcl and Python.

 I use Haskell for my own programs and test utilities / converters. The
 biggest achievement at work was an Installer program, which was quite
 complicated and had to be safe and of course we had time pressure, so I
 quickly coded it in Haskell. It is now used in the installation
 procedure of a part from a big mission control system for satellites.


 lg,
 Michael

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe




-- 
http://www.google.com/profiles/praki.prakash
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Michael Lesniak
Hello,


 elegance of Haskell. Whether Haskell becomes an easy choice for
 commercial work or remains a boutique language depends on how easy it
 is to build today's applications.

Do you (or anyone reading this thread) know of some kind of wishlist
of missing features and/or libraries? Would be nice to see what's
still missing.

- Michael
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Seen on reddit: or, foldl and foldr considered slightly harmful

2010-02-11 Thread Jan-Willem Maessen

On Feb 11, 2010, at 3:41 AM, Johann Höchtl wrote:

 In a presentation of Guy Steele for ICFP 2009 in Edinburgh:
 http://www.vimeo.com/6624203
 he considers foldl and foldr harmful as they hinder parallelism
 because of "Process first element, then the rest". Instead he proposes
 a divide and merge approach, especially in the light of going parallel.
 
 The slides at
 http://docs.google.com/viewer?url=http%3A%2F%2Fresearch.sun.com%2Fprojects%2Fplrg%2FPublications%2FICFPAugust2009Steele.pdf
 [Beware: Google docs]

There's no need to use Google docs.  A direct url for the pdf:

http://research.sun.com/projects/plrg/Publications/ICFPAugust2009Steele.pdf

I recently gave a followup talk at Portland State, arguing that notation 
matters, and that even with better notation programmer mindset is also going to 
be hard to change:

http://research.sun.com/projects/plrg/Publications/PSUJan2010-Maessen.pdf

The key thing here isn't *just* the handedness of lists, but the handedness of 
foldl/foldr *irrespective of the underlying data structure*.  So switching to 
tree-structured data a la fingertrees is a necessary step, but not a sufficient 
one.  The use of monoidal reductions has always been an important part of 
parallel programming.

 are somewhat  geared towards Fortress, but I wonder what Haskellers
 have to say about his position.

Now, what if list comprehensions were really shorthand for construction of 
Applicative or Monoid structures by traversing a mixture of data types with a 
common interface (something like this one)?

class Generator t e | t -> e where
  mapReduce :: (Monoid m) => t -> (e -> m) -> m
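
For concreteness, a rough sketch of how that interface might be inhabited
(purely illustrative; needs MultiParamTypeClasses, FunctionalDependencies,
FlexibleInstances and FlexibleContexts): lists and Data.Sequence both fit,
and the consumer never learns how the container is shaped, so the reduction
is free to be re-associated.

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, FlexibleContexts #-}
import Data.Foldable (foldMap)
import Data.Monoid (Sum (..))
import qualified Data.Sequence as Seq

class Generator t e | t -> e where
  mapReduce :: Monoid m => t -> (e -> m) -> m

instance Generator [a] a where
  mapReduce xs f = foldMap f xs     -- sequential spine, same interface

instance Generator (Seq.Seq a) a where
  mapReduce s f = foldMap f s       -- tree-shaped, re-associable

sumOfSquares :: Generator t Int => t -> Int
sumOfSquares g = getSum (mapReduce g (Sum . (^ 2)))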


-Jan-Willem Maessen
 Another Fortress/Haskell crossover

 Greetings,
 
   Johann
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread John Van Enk
I need to be able to swap out the RTS. The place I want to stick Haskell
absolutely needs its own custom RTS, and currently, I don't think it's all
that easy or clean to do that.

Am I wrong? Are there resources describing how to do this already?

/jve

On Thu, Feb 11, 2010 at 10:12 AM, Michael Lesniak mlesn...@uni-kassel.de wrote:

 Hello,


  elegance of Haskell. Whether Haskell becomes an easy choice for
  commercial work or remains a boutique language depends on how easy it
  is to build today's applications.

 Do you (or anyone reading this thread) know of some kind of wishlist
 of missing features and/or libraries? Would be nice to see what's
 still missing.

 - Michael
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell-Cafe Digest, Vol 78, Issue 14

2010-02-11 Thread Gregory Collins
Maciej Piechotka uzytkown...@gmail.com writes:

 On Tue, 2010-02-09 at 16:41 +, John Lato wrote:
 
 See http://inmachina.net/~jwlato/haskell/ParsecIteratee.hs for a valid
 Stream instance using iteratee.  Also Gregory Collins recently posted
 an iteratee wrapper for Attoparsec to haskell-cafe.  To my knowledge
 these are not yet in any packages, but hackage is vast. 

 Hmm. Am I correct that his implementation caches everything? 

The one that John posted (iteratees on top of parsec) has to keep a copy
of the entire input, because parsec wants to be able to do arbitrary
backtracking on the stream.

Attoparsec provides an *incremental* parser. You feed it bite-sized
chunks of an input stream, and it either says "ok, I'm done, here's your
value, and the rest of the stream I didn't use" or "I couldn't finish,
here's a parser continuation you can feed more chunks to". This, of
course, is a perfect conceptual match for iteratees -- with a little bit
of plumbing you should be able to parse a stream in O(1) space.
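
A minimal sketch of that plumbing (assuming the Data.Attoparsec.ByteString
interface; this is not the exact wrapper I posted): push chunks at the
continuation until the parser finishes, holding only one chunk at a time.

import qualified Data.Attoparsec.ByteString as A
import qualified Data.ByteString as B

parseChunks :: A.Parser a -> [B.ByteString] -> Either String a
parseChunks p = go (A.parse p)
  where
    go k []     = finish (k B.empty)          -- out of input: signal EOF
    go k (c:cs) = case k c of
                    A.Partial k' -> go k' cs  -- parser wants more input
                    r            -> finish r
    finish (A.Done _rest x) = Right x
    finish (A.Fail _ _ err) = Left err
    finish (A.Partial k)    = finish (k B.empty)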


 I tried to rewrite the implementation using... well imperative linked
 list. For trivial benchmark it have large improvement (althought it may
 be due to error in test such as using ByteString) and, I believe, that
 it allows to free memory before finish.

 Results of test on Core 2 Duo 2.8 GHz:
 10:     0.000455s   0.000181s
 100:    0.000669s   0.001104s
 1000:   0.005209s   0.023704s
 10000:  0.053292s   1.423443s
 100000: 0.508093s   132.208597s

Which column corresponds to which module here, and which module are you
benchmarking against, John's or mine?

G
-- 
Gregory Collins g...@gregorycollins.net
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Ivan Panachev
On Thu, Feb 11, 2010 at 6:30 PM, John Van Enk vane...@gmail.com wrote:

 I need to be able to swap out the RTS. The place I want to stick Haskell
 absolutely needs its own custom RTS, and currently, I don't think it's all
 that easy or clean to do that.

 Am I wrong? Are there resources describing how to do this already?


Could you be a bit more precise about your RTS needs? For example, if you
want to run Haskell on raw hardware you might be interested in the House
project, http://programatica.cs.pdx.edu/House/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread John Van Enk
I'm not specifically interested in raw hardware, but I am interested in,
say, making the garbage collection deterministic and altering the scheduler
to fit some other needs. I'll try and find a link to the paper describing
the GC I want to implement.

On Thu, Feb 11, 2010 at 11:10 AM, Ivan Panachev ivan.panac...@gmail.com wrote:

 On Thu, Feb 11, 2010 at 6:30 PM, John Van Enk vane...@gmail.com wrote:

 I need to be able to swap out the RTS. The place I want to stick Haskell
 absolutely needs its own custom RTS, and currently, I don't think it's all
 that easy or clean to do that.

 Am I wrong? Are there resources describing how to do this already?


 Could you be a bit more precise about your RTS needs? For example, if you
 want to run Haskell on raw hardware you might be interested with House
 project, http://programatica.cs.pdx.edu/House/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Haskell and the Job Market, e.g. with Google

2010-02-11 Thread Hans van Thiel
Hello,
Somewhat in response to the original post about Haskell engineers I, II
and III. This confirms the remark that Haskell experience is now being
appreciated, though not (yet) used (very much). Steven Grant, recruiter
from Google, asked me to bring to his attention anyone who might be
suitable, so that's what I'm doing.

start quote
We are currently aggressively recruiting for a large number of engineers
in EMEA. I spotted your extensive open source experience and was
particularily interested to see you have worked with Haskell. I am not
looking for a Haskell developer but more interested in people that have
worked in exotic languages such as Haskell or Erlang or Scheme.

The roles we have are heavily open sourced based and will be mainly
working with Python, C, Linux, shell etc and are based in Dublin, London
or Zurich.

If you have any interest in discussing these further, drop me an email
to stevengr...@google.com and we can discuss.
end quote

From a second email:
start quote
The job specs are below.

http://www.google.ie/support/jobs/bin/answer.py?answer=34884
http://www.google.ie/support/jobs/bin/answer.py?answer=34883

The roles are within a very specialist team within Google.
They are a hybrid type role and are responsible for making our
products reliable scalable and more efficient.
end quote

Get in touch with Steven:

Steven Grant

European IT Staffing
Phone: +353 1 543 5083
Google Ireland Ltd., Barrow Street, Dublin 4, Ireland
Registered in Dublin, Ireland
Registration Number: 368047

I think this is interesting even to those who are not looking for a job
right now, since it shows the current mind-set regarding Haskell, at a
major and leading IT company. 

Best Regards,

Hans van Thiel

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread John Van Enk
Here's the paper:
http://comjnl.oxfordjournals.org/cgi/content/abstract/33/5/466

On Thu, Feb 11, 2010 at 11:45 AM, John Van Enk vane...@gmail.com wrote:

 I'm not specifically interested in raw hardware, but I am interested in,
 say, making the garbage collection deterministic and altering the scheduler
 to fit some other needs. I'll try and find a link to the paper describing
 the GC i want to implement

 On Thu, Feb 11, 2010 at 11:10 AM, Ivan Panachev 
  ivan.panac...@gmail.com wrote:

 On Thu, Feb 11, 2010 at 6:30 PM, John Van Enk vane...@gmail.com wrote:

 I need to be able to swap out the RTS. The place I want to stick Haskell
 absolutely needs its own custom RTS, and currently, I don't think it's all
 that easy or clean to do that.

 Am I wrong? Are there resources describing how to do this already?


 Could you be a bit more precise about your RTS needs? For example, if you
 want to run Haskell on raw hardware you might be interested with House
 project, http://programatica.cs.pdx.edu/House/



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Jason Dusek
  Is JHC not suitable in this case? It won't compile all of
  Haskell but it does seem to be doing the right things as
  regards a pluggable RTS.

  I think it's fair to say at this point that GHC can compile
  all the Haskell we want and that new Haskell pieces will come
  to GHC before anything else gets them. So going with a totally
  new system, front-to-back, is not really desirable when all
  you want is a new RTS; however, I don't think GHC was designed
  to be a Haskell compiler superserver.

--
Jason Dusek
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Streamed creation of STArray from list?

2010-02-11 Thread Jason Dagit
Hello,

I was wondering if there is a trick for generating a new STArray from a list
in such a way that you do not have to hold both the list and array in
memory?

http://www.haskell.org/ghc/docs/latest/html/libraries/array-0.3.0.0/Data-Array-MArray.html

As far as I can tell, that is the interface for creating ST Arrays.  For
example, newListArray looks almost perfect:

newListArray :: (MArray a e m, Ix i) => (i, i) -> [e] -> m (a i e)

If I know the length of the list, I might expect newListArray to have the
memory behavior I want.  In my case, the code calls (length xs) to calculate
the length of the list.  As I understand it, that will force the spine of
the list into memory.  Can this be avoided?  The only trick that comes to
mind is the one used by the C++ vector class where you dynamically resize
the array and copy the values to the new array.  That's potentially a lot of
copying, right?

Is some other array better suited for this, such as vector or uvector?

Thanks,
Jason
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread John Van Enk
Well, my point here is that if we want to see GHC branch into other fields
(mine being safety critical), and actually see the code generated by GHC be
what's really running (rather than once-removed in the form of an EDSL),
some changes will have to be made.

Being able to experiment with GHC's RTS and possibly being able to write
your own (should the project require it) would go a long way to helping me
make the case for GHC in safety critical.

Perhaps I'd be better off looking at UHC/LHC/JHC as a starting place.

/jve

On Thu, Feb 11, 2010 at 12:13 PM, Jason Dusek jason.du...@gmail.com wrote:

  Is JHC not suitable in this case? It won't compile all of
  Haskell but it does seem to be doing the right things as
  regards a pluggable RTS.

  I think it's fair to say at this point that GHC can compile
  all the Haskell we want and that new Haskell pieces will come
  to GHC before anything else gets them. So going with a totally
  new system, front-to-back, is not really desirable when all
  you want is a new RTS; however, I don't think GHC was designed
  to be a Haskell compiler superserver.

 --
 Jason Dusek

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] haskell-src type inference algorithm?

2010-02-11 Thread Job Vranish
Anyone know of a type inference utility that can run right on haskell-src
types? Or one that could be easily adapted?
I want to be able to pass in an HsExp and get back an HsQualType. It doesn't
have to be fancy; plain Haskell98 types would do.

It wouldn't be too hard to make one myself, but I figured there might be one
floating around already and it'd be a shame to write it twice :)

Thanks,

- Job
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell-src type inference algorithm?

2010-02-11 Thread Stephen Tetley
Hello Job

For Haskell 98 would the code from 'Typing Haskell in Haskell' paper suffice?

A web search should find the code...

Best wishes

Stephen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell-src type inference algorithm?

2010-02-11 Thread Stephen Tetley
http://web.cecs.pdx.edu/~mpj/thih/

Looks like it's a type _checker_ though...


On 11 February 2010 17:39, Stephen Tetley stephen.tet...@gmail.com wrote:
 Hello Job

 For Haskell 98 would the code from 'Typing Haskell in Haskell' paper suffice?

 A web search should find the code...

 Best wishes

 Stephen

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell and the Job Market, e.g. with Google

2010-02-11 Thread Gwern Branwen
On Thu, Feb 11, 2010 at 11:49 AM, Hans van Thiel hthiel.c...@zonnet.nl wrote:
 Hello,
 Somewhat in response to the original post about Haskell engineers I, II
 and III. This confirms the remark that Haskell experience is now being
 appreciated, though not (yet) used (very much). Steven Grant, recruiter
 from Google, asked me to bring to his attention anyone who might be
 suitable, so that's what I'm doing.

 start quote
 We are currently aggressively recruiting for a large number of engineers
 in EMEA. I spotted your extensive open source experience and was
 particularily interested to see you have worked with Haskell. I am not
 looking for a Haskell developer but more interested in people that have
 worked in exotic languages such as Haskell or Erlang or Scheme.

 The roles we have are heavily open sourced based and will be mainly
 working with Python, C, Linux, shell etc and are based in Dublin, London
 or Zurich.

 If you have any interest in discussing these further, drop me an email
 to stevengr...@google.com and we can discuss.
 end quote

 From a second email:
 start quote
 The job specs are below.

 http://www.google.ie/support/jobs/bin/answer.py?answer=34884
 http://www.google.ie/support/jobs/bin/answer.py?answer=34883

 The roles are within a very specialist team within Google.
 They are a hybrid type role and are responsible for making our
 products reliable scalable and more efficient.
 end quote

 Get in touch with Steven:

 Steven Grant

 European IT Staffing
 Phone: +353 1 543 5083
 Google Ireland Ltd., Barrow Street, Dublin 4, Ireland
 Registered in Dublin, Ireland
 Registration Number: 368047

 I think this is interesting even to those who are not looking for a job
 right now, since it shows the current mind-set regarding Haskell, at a
 major and leading IT company.

 Best Regards,

 Hans van Thiel

I would be far from the first to remark that the 'Python Paradox'
(http://www.paulgraham.com/pypar.html) has moved on and become the
Scala/Haskell Paradox.

-- 
gwern
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread stefan kersten
On 10.02.10 19:03, Bryan O'Sullivan wrote:
 I'm thinking of switching the statistics library over to using vector.

that would be even better of course! an O(0) solution, at least for me ;) let me
know if i can be of any help (e.g. in testing). i suppose uvector-algorithms
would also need to be ported to vector, then.

 uvector is pretty bit-rotted in comparison to vector at this point, and
 it's really seeing no development, while vector is The Shiny Future.
 Roman, would you call the vector library good enough to use in
 production at the moment?

i've been using the library for wavelet transforms, matching pursuits and the
like, and while my implementations are not heavily optimized, they perform
reasonably well (no benchmarking done yet, though). the key arguments for using
vector instead of uvector were the cleaner interface and Data.Vector.Storable
for interfacing with foreign libraries (such as fftw, through the fft package).
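
A minimal sketch of that interfacing style (illustrative only, not code from
the fft bindings themselves): unsafeWith exposes the vector's buffer as a
plain Ptr, which is exactly what a C routine such as fftw's would consume;
here the elements are just read back to keep the example self-contained.

import qualified Data.Vector.Storable as V
import Foreign.Storable (peekElemOff)

sumViaPtr :: V.Vector Double -> IO Double
sumViaPtr v = V.unsafeWith v (\p -> go p 0 0)
  where
    n = V.length v
    go p acc i
      | i >= n    = return acc
      | otherwise = do x <- peekElemOff p i   -- read straight from the buffer
                       go p (acc + x) (i + 1)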

sk
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell-src type inference algorithm?

2010-02-11 Thread Lennart Augustsson
To do anything interesting you also need to process modules, something
which I hope to contribute soon to haskell-src-exts.


On Thu, Feb 11, 2010 at 6:35 PM, Job Vranish job.vran...@gmail.com wrote:
 Anyone know of a type inference utility that can run right on haskell-src
 types? or one that could be easily adapted?
 I want to be able to pass in an HsExp and get back an HsQualType. It doesn't
 have to be fancy, plain Haskell98 types would do.

 It wouldn't be to hard to make one myself, but I figured there might be one
 floating around already and it'd be a shame to write it twice :)

 Thanks,

 - Job

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell-src type inference algorithm?

2010-02-11 Thread Lennart Augustsson
It does type inference, it's just not engineered to be part of a real compiler.

On Thu, Feb 11, 2010 at 6:41 PM, Stephen Tetley
stephen.tet...@gmail.com wrote:
 http://web.cecs.pdx.edu/~mpj/thih/

 Looks like its a type _checker_ though...


 On 11 February 2010 17:39, Stephen Tetley stephen.tet...@gmail.com wrote:
 Hello Job

 For Haskell 98 would the code from 'Typing Haskell in Haskell' paper suffice?

 A web search should find the code...

 Best wishes

 Stephen

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread Henning Thielemann
stefan kersten schrieb:

 uvector is pretty bit-rotted in comparison to vector at this point, and
 it's really seeing no development, while vector is The Shiny Future.
 Roman, would you call the vector library good enough to use in
 production at the moment?
 
 i've been using the library for wavelet transforms, matching pursuits and the
 like,

Nice! I have also worked on these topics, even with Haskell. However, at
that time I used plain lists.

 and while my implementations are not heavily optimized, they perform
 reasonably well (no benchmarking done yet, though). the key arguments for 
 using
 vector instead of uvector were the cleaner interface and Data.Vector.Storable
 for interfacing with foreign libraries (such as fftw, through the fft 
 package).

Btw. Data.StorableVector can also be used for this interfacing, and I
would be very interested in an interface to FFTW. Actually, I have
already used FFTW on StorableVector

http://code.haskell.org/~thielema/morbus-meniere/src/StorableVectorCArray.hs

There is also Data.StorableVector.Lazy which is nice for processing
stream data.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Henning Thielemann
John Van Enk schrieb:
 I need to be able to swap out the RTS. The place I want to stick Haskell
 absolutely needs its own custom RTS, and currently, I don't think it's
 all that easy or clean to do that.
 
 Am I wrong? Are there resources describing how to do this already?

As far as I know JHC is intended to work without an RTS.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Hackage download statistics

2010-02-11 Thread Maurício CA

Hi, all,

Some time ago download statistics for Hackage were made available,
and analysed in a few ways. Googling for them still finds that on a
Galois web page.

I thought that, since the tools to produce them were there, they would
be published once in a while, but it seems it hasn't been done since
then.

Does anyone know if there are plans for that?

Thanks,

Maurício

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread John Meacham
On Thu, Feb 11, 2010 at 06:57:48PM +0100, Henning Thielemann wrote:
 John Van Enk schrieb:
  I need to be able to swap out the RTS. The place I want to stick Haskell
  absolutely needs its own custom RTS, and currently, I don't think it's
  all that easy or clean to do that.
  
  Am I wrong? Are there resources describing how to do this already?
 
 As far as I know JHC is intended to work without an RTS.

It is more that the RTS is generated as part of the normal code
generation process. This is done by implementing as much as possible in
Haskell itself; jhc has a very rich set of unboxed primitives, making it
as expressive as C-- for the most part. For the bits of C I do need, I
try to make them conditionally compilable, so parts that aren't used will
not be included. All in all, the overhead is ~1k or so. A side effect
is that jhc is very lightly coupled to any particular RTS, so
experimenting with alternate ones is pretty straightforward.

John

-- 
John Meacham - ⑆repetae.net⑆john⑈ - http://notanumber.net/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Matthias Görgens
Implementing an alternative RTS for GHC seems like a viable Google
Summer of Code project to me.  What do you think?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell-src type inference algorithm?

2010-02-11 Thread Jason Dagit
On Thu, Feb 11, 2010 at 9:35 AM, Job Vranish job.vran...@gmail.com wrote:

 Anyone know of a type inference utility that can run right on haskell-src
 types? or one that could be easily adapted?
 I want to be able to pass in an HsExp and get back an HsQualType. It
 doesn't have to be fancy, plain Haskell98 types would do.

 It wouldn't be to hard to make one myself, but I figured there might be one
 floating around already and it'd be a shame to write it twice :)


I've never checked to know if this is true, but could you use the GHC API to
have GHC do your type inference/checking?
http://www.haskell.org/ghc/docs/latest/html/libraries/ghc-6.12.1/index.html

If you figure out if this is possible (or not), I'd love to hear what you
figure out.

Jason
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Streamed creation of STArray from list?

2010-02-11 Thread Ryan Ingram
On Thu, Feb 11, 2010 at 9:23 AM, Jason Dagit da...@codersbase.com wrote:
 If I know the length of the list, I might expect newListArray to have the
 memory behavior I want.  In my case, the code calls (length xs) to calculate
 the length of the list.  As I understand it, that will force the spine of
 the list into memory.  Can this be avoided?  The only trick that comes into
 mind is the one used by the C++ vector class where you dynamically resize
 the array and copy the values to the new array.  That's potentially a lot of
 copying, right?

Not really: as long as you grow the array exponentially the total
number of copies is linear in the size of the array.  On average you
will copy each element twice; the first element gets copied (log n)
times but the last half of the elements go directly into the final
array.

If you want to avoid putting any of the spine into memory you might do
a third copy where you shrink the array size at the end (or perhaps
there is an array type that has a 'shrink' operation that just reduces
the bounds without freeing any memory)
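
A sketch of that doubling scheme with STArray (illustrative only, and not
measured for performance): grow a scratch array as the list is consumed,
then do one final copy into an exactly-sized result.

{-# LANGUAGE ScopedTypeVariables #-}
import Control.Monad.ST (ST)
import Data.Array (Array)
import Data.Array.ST (STArray, newArray_, readArray, writeArray, runSTArray)

fromListGrow :: forall e. [e] -> Array Int e
fromListGrow xs = runSTArray (do
    arr0 <- newArray_ (0, 0)             -- initial capacity: one element
    (arr, n) <- fill arr0 1 0 xs
    final <- newArray_ (0, n - 1)        -- final copy: shrink to exact size
    mapM_ (\j -> readArray arr j >>= writeArray final j) [0 .. n - 1]
    return final)
  where
    fill :: STArray s Int e -> Int -> Int -> [e] -> ST s (STArray s Int e, Int)
    fill arr _   i []     = return (arr, i)
    fill arr cap i (y:ys)
      | i < cap   = writeArray arr i y >> fill arr cap (i + 1) ys
      | otherwise = do                   -- full: double capacity and copy over
          arr' <- newArray_ (0, 2 * cap - 1)
          mapM_ (\j -> readArray arr j >>= writeArray arr' j) [0 .. cap - 1]
          writeArray arr' i y
          fill arr' (2 * cap) (i + 1) ys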

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: sendfile leaking descriptors on Linux?

2010-02-11 Thread Jeremy Shaw
On Wed, Feb 10, 2010 at 1:15 PM, Bardur Arantsson s...@scientician.net wrote:

I've also been contemplating some solutions, but I cannot see any solutions
 to this problem which could reasonably be implemented outside of GHC itself.
 GHC lacks a threadWaitError, so there's no way to detect the problem
 except by timeout or polling. Solutions involving timeouts and polling are
 bad in this case because they arbitrarily restrict the client connection
 rate.

 Cheers,


I believe solutions involving polling and timeouts may be the *only*
solution due to the way TCP works. There are two cases to consider here:

 1. what happens when the remote client does a proper disconnect by sending
a FIN packet, etc
 2. what happens when the remote client just drops the connection

Case #1 - Proper Disconnect

I believe that in this case we are OK. select() may not wake up due to the socket
being closed -- but something will eventually cause select() to wake up, and
then the next time through the loop, the call to select will fail with EBADF.
This will cause everyone to wake up. We can test this case by writing a
client that purposely (and correctly) terminates the connection while
threadWaitWrite is blocking and see if that causes it to wake up. To ensure
that the IOManager eventually wakes up, the server can have an IO thread
that just does: forever $ threadDelay (1*10^6)

Look here for more details:
http://darcs.haskell.org/packages/base/GHC/Conc.lhs

Case #2 - Sudden Death

In this case, there is no way to tell if the client is still there with out
trying to send / recv data. A TCP connection is not a 'tangible' link. It is
just an agreement to send packets to/from certain ports with certain
sequence numbers. It's much closer to snail mail than a telephone call.

If you set the keepalive socket option, then the TCP layer will
automatically ping the connection to make sure it is still alive. However, I
believe the default time between keepalive packets is 2 hours, and can only
be changed on a system wide basis?

http://www.unixguide.net/network/socketfaq/2.8.shtml
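
For reference, turning keepalive on from Haskell looks something like this
(a sketch using the network package; the probe interval itself stays a
kernel-level, usually system-wide, setting):

import Network.Socket (Socket, SocketOption (KeepAlive), setSocketOption)

enableKeepAlive :: Socket -> IO ()
enableKeepAlive sock = setSocketOption sock KeepAlive 1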

The other option is to try to send some data. There are at least two cases
that can happen here.

 1. the network cable is unplugged -- this is not an 'error'. The write
buffer will fill up and it will wait until it can send the data. If the
write buffer is full, it will either block or return EAGAIN depending on the
mode. Eventually, after 2 hours, it might give up.

 2. the remote client has terminated the connection as far as it is
concerned but not notified the server -- when you try to send data it will
reject it, and send/write/sendfile/etc will raise sigPIPE.

Looking at your debug output, we are seeing the sigPIPE / Broken Pipe error
most of the time. But then there is the case where we get stuck on the
threadWaitWrite.

threadWaitWrite is ultimately implemented by passing the file descriptor to
the list of write descriptors in a call to select(). It seems, however, that
select() is not waking up just because calling write() on a file descriptor
*would* cause sigPIPE.

The easiest way to confirm this case is probably to write a small, pure C
program and see what really happens.

If this is the case, then it means the only way to tell if the client has
abruptly dropped the connection is to actually try sending the data and see
if the sending function calls sigPIPE. And that means doing some sort of
polling/timeout?

What do you think?

I do not have a good explanation as to why the portable version does not
fail. Except maybe it is just so slow that it does not ever fill up the
buffer, and hence does not get stuck in threadWaitWrite?

Any way, the fundamental question is:

 When your write buffer is full, and you call select() on that file
descriptor, will select() return in the case where calling write() again
would raise sigPIPE?

- jeremy
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell-Cafe Digest, Vol 78, Issue 14

2010-02-11 Thread Maciej Piechotka
On Thu, 2010-02-11 at 11:00 -0500, Gregory Collins wrote:
 Maciej Piechotka uzytkown...@gmail.com writes:
 
  On Tue, 2010-02-09 at 16:41 +, John Lato wrote:
  
  See http://inmachina.net/~jwlato/haskell/ParsecIteratee.hs for a valid
  Stream instance using iteratee.  Also Gregory Collins recently posted
  an iteratee wrapper for Attoparsec to haskell-cafe.  To my knowledge
  these are not yet in any packages, but hackage is vast. 
 
  Hmm. Am I correct that his implementation caches everything? 
 
 The one that John posted (iteratees on top of parsec) has to keep a copy
 of the entire input, because parsec wants to be able to do arbitrary
 backtracking on the stream.
 

Well. Not quite. AFAIU (and the ByteString implementation indicates so) the
uncons has the type
uncons :: s -> m (Maybe (t, s))

Where s indicates the position on the stream. Since it is impossible to
get back from having s alone the GC should be free to finalize all
memory allocated to cache the stream before the first living s.

I.e. if input is:

text = 'L':'o':'r':'e':'m':' ':'i':'p':'s':'u':'m':[]
^   ^
s1  s2

and s1 and s2 are position in the stream (for stream that is list)
GC can free the Lor part. It seems that it might be significant in real life
as try calls are relatively short compared with the rest of the code.

By keeping s as a 'pointer to' an element, the second uncons takes O(1) time
instead of O(n).
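
For reference, the list instance of parsec-3's Stream class looks roughly
like the following (a sketch from memory, with the class cut down to its
uncons method): the stream state s is nothing more than the remaining
tail, so everything before the oldest live tail can be collected.

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

class Monad m => Stream s m t | s -> t where
    uncons :: s -> m (Maybe (t, s))

-- A "position" is simply the remaining list; uncons never looks backwards.
instance Monad m => Stream [tok] m tok where
    uncons []     = return Nothing
    uncons (t:ts) = return (Just (t, ts))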

 
  I tried to rewrite the implementation using... well imperative linked
  list. For trivial benchmark it have large improvement (althought it may
  be due to error in test such as using ByteString) and, I believe, that
  it allows to free memory before finish.
 
  Results of test on Core 2 Duo 2.8 GHz:
  10: 0.000455s   0.000181s
  100:0.000669s   0.001104s
  1000:   0.005209s   0.023704s
  1:  0.053292s   1.423443s
  10: 0.508093s   132.208597s
 
 Which column corresponds to which module here, and which module are you
 benchmarking against, John's or mine?
 
 G

As I'm implementing for parsec (I don't know attoparsec) [as a kind of
exercise to get to know iteratee better] I benchmarked against John's. My
results are on the left.

I forgot to compile and optimize. Here's result for ByteString:
Mine            John's
10: 0.000425s   0.000215s
100:0.000616s   0.001963s
1000:   0.0041s 0.048359s
1:  0.041694s   4.492774s
10: 0.309289s   434.238449s

And []:
Mine            John's
10: 0.000605s   0.000932s
100:0.001464s   0.008101s
1000:   0.004036s   0.054125s
1:  0.032341s   1.36938s
10: 0.317859s   115.846891s

Regards and sorry for confusion 

PS. Sorry - I know that the test is somewhat simplistic but I think it
emulates a real-life situation where you are interested in a small number of
elements around the current position (short try). I also think
that /dev/zero has relatively predictable access time (it does not need
to be loaded from a slow disk first, after which it can be accessed from
cache, and it does not run out of entropy etc.).


signature.asc
Description: This is a digitally signed message part
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell-Cafe Digest, Vol 78, Issue 14

2010-02-11 Thread Antoine Latter
On Thu, Feb 11, 2010 at 1:27 PM, Maciej Piechotka uzytkown...@gmail.com wrote:
 On Thu, 2010-02-11 at 11:00 -0500, Gregory Collins wrote:
 Maciej Piechotka uzytkown...@gmail.com writes:

  On Tue, 2010-02-09 at 16:41 +, John Lato wrote:
 
  See http://inmachina.net/~jwlato/haskell/ParsecIteratee.hs for a valid
  Stream instance using iteratee.  Also Gregory Collins recently posted
  an iteratee wrapper for Attoparsec to haskell-cafe.  To my knowledge
  these are not yet in any packages, but hackage is vast.
 
  Hmm. Am I correct that his implementation caches everything?

 The one that John posted (iteratees on top of parsec) has to keep a copy
 of the entire input, because parsec wants to be able to do arbitrary
 backtracking on the stream.


 Well. Not quite. AFAIU (and ByteString implementation indicate so) the
 uncons have a type
    uncons :: s -> m (Maybe (t, s))

 Where s indicates the position on the stream. Since it is impossible to
 get back from having s alone the GC should be free to finalize all
 memory allocated to cache the stream before the first living s.


I'm not sure that this is correct -  parsec believes that it is free
to call 'uncons' multiple times on the same value and receive an
equivalent answer.

Maybe I'm misunderstanding what we're talking about, but a simple test is:

backtrackTest = (try (string "aardvark")) <|> (string "aaple")

And then attempt to parse the stream equivalent to "aaple" with 'backtrackTest'.

This should be a successful parse (untested).

Antoine
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread stefan kersten
On 11.02.10 18:55, Henning Thielemann wrote:
 i've been using the library for wavelet transforms, matching pursuits
 and the like,

 Nice I have also worked on this topics, even with Haskell. However, at
 that time I used plain lists.

interesting! was performance acceptable for practical work? at the moment i'm
not too concerned about performance -- a reasonable baseline could be to be
competitive with matlab. in the long run i hope i'll be able to scale my stuff
to larger amounts of data, however ...

 and while my implementations are not heavily optimized, they perform
 reasonably well (no benchmarking done yet, though). the key arguments
 for using vector instead of uvector were the cleaner interface and
 Data.Vector.Storable for interfacing with foreign libraries (such as
 fftw, through the fft package).

 Btw. Data.StorableVector can also be used for this interfacing, and
 I would be very interested in an interface to FFTW. Actually, I have
 already used FFTW on StorableVector

i'm simply using the fft package and adapted some of its internals to work on
Data.Vector.Storable; nothing fancy though, and only for RC and CR transforms.
let me know if you're interested in the code ...

 There is also Data.StorableVector.Lazy which is nice for processing
 stream data.

yes, i know about storablevector, but i already had some code using uvector,
so in the end vector was the easier upgrade. to me the relative merits of
storablevector vs. vector are still unclear; the lazy interface could be
implemented on top of vector as well, i suppose?

sk
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Iteratee and parsec (was: Re: [Haskell-cafe] Re: Haskell-Cafe Digest, Vol 78, Issue 14)

2010-02-11 Thread Maciej Piechotka
On Thu, 2010-02-11 at 13:34 -0600, Antoine Latter wrote:
 On Thu, Feb 11, 2010 at 1:27 PM, Maciej Piechotka
 uzytkown...@gmail.com wrote:
  On Thu, 2010-02-11 at 11:00 -0500, Gregory Collins wrote:
  Maciej Piechotka uzytkown...@gmail.com writes:
 
   On Tue, 2010-02-09 at 16:41 +, John Lato wrote:
  
   See http://inmachina.net/~jwlato/haskell/ParsecIteratee.hs for a
 valid
   Stream instance using iteratee.  Also Gregory Collins recently
 posted
   an iteratee wrapper for Attoparsec to haskell-cafe.  To my
 knowledge
   these are not yet in any packages, but hackage is vast.
  
   Hmm. Am I correct that his implementation caches everything?
 
  The one that John posted (iteratees on top of parsec) has to keep a
 copy
  of the entire input, because parsec wants to be able to do
 arbitrary
  backtracking on the stream.
 
 
  Well. Not quite. AFAIU (and ByteString implementation indicate so)
 the
  uncons have a type
 uncons :: s -> m (Maybe (t, s))
 
  Where s indicates the position on the stream. Since it is impossible
 to
  get back from having s alone the GC should be free to finalize all
  memory allocated to cache the stream before the first living s.
 
 
 I'm not sure that this is correct -  parsec believes that it is free
 to call 'uncons' multiple times on the same value and receive an
 equivalent answer. 


That's what I meant. But it has to keep the reference to the first
element.

Consider example with list:



text = 'L':'o':'r':'e':'m':' ':'i':'p':'s':'u':'m':[]
^   ^  ^
s1  s2 s3

uncons s1 == Identity (Just ('e', 'm':' ':'i':'p':'s':'u':'m':[]))
uncons s2 == Identity (Just ('p', 's':'u':'m':[]))
uncons s3 == Identity Nothing

However we will never get (nor do we keep a reference to):
'L':'o':'r':'e':'m':' ':'i':'p':'s':'u':'m':[],
'o':'r':'e':'m':' ':'i':'p':'s':'u':'m':[]
'r':'e':'m':' ':'i':'p':'s':'u':'m':[]
so those values can be freed as they are before the first pointer (namely
s1).

Regards


signature.asc
Description: This is a digitally signed message part
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: sendfile leaking descriptors on Linux?

2010-02-11 Thread Bardur Arantsson

Jeremy Shaw wrote:

On Wed, Feb 10, 2010 at 1:15 PM, Bardur Arantsson s...@scientician.netwrote:

I've also been contemplating some solutions, but I cannot see any solutions

to this problem which could reasonably be implemented outside of GHC itself.
GHC lacks a threadWaitError, so there's no way to detect the problem
except by timeout or polling. Solutions involving timeouts and polling are
bad in this case because they arbitrarily restrict the client connection
rate.

Cheers,



I believe solutions involving polling and timeouts may be the *only*
solution due to the way TCP works. There are two cases to consider here:



True, but my point was rather that a solution in the sendfile library
would incur an _extra_ timeout on top of the timeout which is handled by 
the OS. It's very hard to come up with a proper timeout here because 
apps will have different requirements depending on the expected 
connection rate, etc. This is what I see as unacceptable since it would 
have to be a completely arbitrary timeout -- there's no way for the 
application to specify a timeout to the sendfile library since the API 
doesn't permit it.


[--snip--]

Case #1 - Proper Disconnect

I believe that in case we are ok. select() may not wakeup due to the socket
being closed -- but something will eventually cause select() to wakeup, and
then next time through the loop, the call to select will fail with EBADF.
This will cause everyone to wakeup. We can test this case by writing a
client that purposely (and correctly) terminates the connection while
threadWaitWrite is blocking and see if that causes it to wakeup. To ensure
that the IOManager is eventually waking up, the server can have an IO thread
that just does, forever $ threadDelay (1*10^6)

Look here for more details:
http://darcs.haskell.org/packages/base/GHC/Conc.lhs



I don't have time to write a C test program right now. I'm actually not 
100% convinced that this case is *not* problematic, but my limited 
testing with well-behaved clients (wget, curl) hasn't turned up any 
problems so far.



Case #2 - Sudden Death

In this case, there is no way to tell if the client is still there without
trying to send / recv data. A TCP connection is not a 'tangible' link. It is
just an agreement to send packets to/from certain ports with certain
sequence numbers. It's much closer to snail mail than a telephone call.

If you set the keepalive socket option, then the TCP layer will
automatically ping the connection to make sure it is still alive. However, I
believe the default time between keepalive packets is 2 hours, and can only
be changed on a system wide basis?

http://www.unixguide.net/network/socketfaq/2.8.shtml


There are some options you can set via setsockopt(), see man 7 tcp:

   tcp_keepalive_intvl (default: 75s)
   tcp_fin_timeout (default: 60s)

(The latter is the amount of time to wait for the final FIN before 
forcing a the socket to close.)


These can be set per-socket.



The other option is to try to send some data. There are at least two cases
that can happen here.


This is what I tried. The trouble here is that you have to force the 
thread doing threadWaitWrite to wake up periodically... and how do you 
decide how often? Too often and you're burning CPU doing nothing, too 
seldom and you're letting threads (and by implication 
used-but-really-disconnected-as-far-as-the-OS-is-concerned file 
descriptors) pile up. The overhead of memcpy (avoidance of which is 
sendfile's raison-d'être) is probably much less than the overhead of 
doing all this administration in userspace instead of just letting the 
kernel do its thing.


Even waking up very seldom (~1/s IIRC) incurred a lot of CPU overhead in 
my test case... but I suppose I could give it another try to see if I'd 
made some mistake in my code which caused it to use more CPU than necessary.




 1. the network cable is unplugged -- this is not an 'error'. The write
buffer will fill up and it will wait until it can send the data. If the
write buffer is full, it will either block or return EAGAIN depending on the
mode. Eventually, after 2 hours, it might give up.


I believe the socket is actually in non-blocking mode in my application. 
 I'm not putting it into non-blocking mode, so I'm guessing that the 
accept call is doing that -- or maybe it's just the default behavior 
of accept() on Linux. Converting a socket to a Handle (which is what the 
portable sendfile does) automatically puts it into blocking mode.


Actually, I think this whole issue could be avoided if the socket could 
just be forced into blocking mode. In that case, there would be no need 
to call threadWaitWrite: The native sendfile() call could never return 
EAGAIN (it would block instead), and so there'd be no need to call 
threadWaitWrite to avoid busy-waiting.



 2. the remote client has terminated the connection as far as it is
concerned but not notified the server -- when you try to send data it will
reject it, and 

Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread John Van Enk
I'd suggested this in an earlier SoC thread.

2010/2/11 Matthias Görgens matthias.goerg...@googlemail.com

 Implementing an alternative RTS for GHC seems like a viable Google
 Summer of Code project to me.  What do you think?
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread John Van Enk
I'll definitely take a closer look.

On Thu, Feb 11, 2010 at 1:09 PM, John Meacham j...@repetae.net wrote:

 On Thu, Feb 11, 2010 at 06:57:48PM +0100, Henning Thielemann wrote:
  John Van Enk schrieb:
   I need to be able to swap out the RTS. The place I want to stick
 Haskell
   absolutely needs its own custom RTS, and currently, I don't think it's
   all that easy or clean to do that.
  
   Am I wrong? Are there resources describing how to do this already?
 
  As far as I know JHC is intended to work without an RTS.

 It is more that the RTS is generated as a part of the normal code
 generation process. This is done by implementing as much as possible in
 haskell itself; jhc has a very rich set of unboxed primitives, making it
 as expressible as c-- for the most part. For the bits of C I do need, I
 try to make them conditionally compilable, so parts that aren't used will
 not be included. All in all, the overhead is ~= 1k or so. A side effect
 is that jhc is very lightly coupled to any particular RTS, so
 experimenting with alternate ones is pretty straightforward.

John

 --
 John Meacham - ⑆repetae.net⑆john⑈ - http://notanumber.net/
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Alp Mestanogullari
It seems quite big for a 3-month project made by a student, though.

2010/2/11 Matthias Görgens matthias.goerg...@googlemail.com

 Implementing an alternative RTS for GHC seems like a viable Google
 Summer of Code project to me.  What do you think?




-- 
Alp Mestanogullari
http://alpmestan.wordpress.com/
http://alp.developpez.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Undecidable instances with functional dependencies

2010-02-11 Thread Henning Thielemann


I have the following class and instance

  class Register a r | a -> r where

  instance (Register a ra, Register b rb) =>
     Register (a,b) (ra,rb) where

and GHC refuses the instance because of violated Coverage Condition.
I have more instances like

  instance Register Int8  (Reg Int8)  where
  instance Register Word8 (Reg Word8) where

and for the set of instances I plan, the instance resolution will always 
terminate. I remember that the notion of 'undecidable instance' is not fixed 
and that the check may be relaxed if a more liberal condition can be found. Is there a 
place, say a Wiki page, where we can collect examples where we think that 
the current check of GHC is too restrictive?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Undecidable instances with functional dependencies

2010-02-11 Thread Miguel Mitrofanov

-- {-# LANGUAGE FunctionalDependencies #-}
-- {-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE TypeFamilies #-}
module Register where
-- class Register a r | a -> r
class Register a where
  type R a
-- instance Register Int Int
instance Register Int where
  type R Int = Int
-- instance Register Float Float
instance Register Float where
  type R Float = Float
-- instance (Register a1 r1, Register a2 r2) => Register (a1, a2) (r1, r2)

instance (Register a, Register b) => Register (a, b) where
  type R (a, b) = (R a, R b)
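
For completeness: I believe the original fundep formulation is also accepted
once UndecidableInstances lifts the Coverage Condition. A minimal sketch
(module name made up, the Int8/Word8 instances left out):

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, UndecidableInstances #-}
module RegisterFD where

class Register a r | a -> r

instance (Register a ra, Register b rb) => Register (a, b) (ra, rb)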

On 12 Feb 2010, at 00:32, Henning Thielemann wrote:



I have the following class and instance

 class Register a r | a -> r where

 instance (Register a ra, Register b rb) =>
Register (a,b) (ra,rb) where

and GHC refuses the instance because of violated Coverage Condition.
I have more instances like

 instance Register Int8  (Reg Int8)  where
 instance Register Word8 (Reg Word8) where

and for the set of instances I plan, the instance resolution will  
always terminate. I remember that the term 'undecidable instance' is  
not fixed and may be relaxed if a more liberal condition can be  
found. Is there a place, say a Wiki page, where we can collect  
examples where we think that the current check of GHC is too  
restrictive?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread John Van Enk
Perhaps just defining the interface and demonstrating that different RTS's
are swappable would be enough?

2010/2/11 Alp Mestanogullari a...@mestan.fr

 It seems quite big for a 3 months project made by a student, though.

 2010/2/11 Matthias Görgens matthias.goerg...@googlemail.com

 Implementing an alternative RTS for GHC seems like a viable Google

 Summer of Code project to me.  What do you think?




 --
 Alp Mestanogullari
 http://alpmestan.wordpress.com/
 http://alp.developpez.com/

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: sendfile leaking descriptors on Linux?

2010-02-11 Thread Thomas DuBuisson
Bardur Arantsson s...@scientician.net wrote:
 ...
       then do errno <- getErrno
               if errno == eAGAIN
                 then do
                    threadDelay 100
                    sendfile out_fd in_fd poff bytes
                 else throwErrno "Network.Socket.SendFile.Linux"
      else return (fromIntegral sbytes)

 That is, I removed the threadWaitWrite in favor of just adding a
 threadDelay 100 when eAGAIN is encountered.

 With this code, I cannot provoke the leak.

 Unfortunately this isn't really a solution -- the CPU is pegged at
 ~50% when I do this and it's not exactly elegant to have a hardcoded
 100 ms delay in there. :)

I don't think it matters wrt the desired final solution, but this is
NOT a 100 ms delay.  It is a 0.1 ms delay, which is less than a GHC
time slice and as such is basically a tight loop.  If you use a
reasonable value for the delay you will probably see the CPU being
almost completely idle.
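
Since threadDelay takes microseconds, a real 100 ms pause -- if that is
what was intended -- would be the following sketch (helper name made up):

import Control.Concurrent (threadDelay)

pause100ms :: IO ()
pause100ms = threadDelay (100 * 1000)  -- 100,000 microseconds = 100 ms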

Thomas
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Evan Laforge
On Thu, Feb 11, 2010 at 1:49 PM, John Van Enk vane...@gmail.com wrote:
 Perhaps just defining the interface and demonstrating that different RTS's
 are swappable would be enough?

I read a paper by (I think) a Simon, in which he described a haskell
RTS.  It would make it easier to experiment with GC, scheduling, and
whatever else.  I recall a few problems, such as performance, but
nothing really intractable.  Swappable RTS would be a nice
side-effect.

Unfortunately I don't remember the title of the paper.  Maybe it had
to do with the whole GMP thing?

It might be big for SoC but perhaps there's some well-defined subset,
like fix some blocking issue?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type arithmetic with ATs/TFs

2010-02-11 Thread Andrew Coppin

Andrew Coppin wrote:

OK, so I sat down today and tried this, but I can't figure out how.

There are various examples of type-level arithmetic around the place. 
For example,


http://www.haskell.org/haskellwiki/Type_arithmetic

(This is THE first hit on Google, by the way. Haskell is apparently 
THAT popular!) But this does type arithmetic using functional 
dependencies; what I'm trying to figure out is how to do that with 
associated types.


Any hints?


Several people have now replied to this, both on and off-list. But all 
the replies use type families, not associated types.


Now type families are something I don't yet comprehend. (Perhaps the 
replies will help... I haven't studied them yet.) What I understand is 
that ATs allow you to write things like


 class Container c where
   type Element c :: *
   ...

And now you can explicitly talk about the kind of element a container 
can hold, rather than relying on the type constructor having a 
particular kind or something. So the above works for containers that can 
hold *anything* (such as lists), containers which can only hold *one* 
thing (e.g., ByteString), and containers which can hold only certain 
things (e.g., Set).


...which is great. But I can't see a way to use this for type 
arithmetic. Possibly because I don't have a dramatically solid mental 
model of exactly how it works. You'd *think* that something like


 class Add x y where
   type Sum x y :: *

 instance Add x y => Add (Succ x) y where
   type Sum (Succ x) y = Succ (Sum x y)

ought to work, but apparently not.

As to what type families - type declarations outside of a class - end up 
meaning, I haven't the vaguest idea. The Wiki page makes it sound 
incredibly complicated...


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: Happstack 0.4.1

2010-02-11 Thread Станислав Черничкин
I was unable to install happstack-data-0.4.1 on windows with GHC 6.12.1. 
Here the log:


Resolving dependencies...
Configuring happstack-data-0.4.1...
Preprocessing library happstack-data-0.4.1...
Preprocessing executables for happstack-data-0.4.1...
Building happstack-data-0.4.1...

src\Happstack\Data\GOps.hs:54:0:
 warning: no newline at end of file

src\Happstack\Data\SerializeTH.hs:88:0:
 warning: no newline at end of file
[ 1 of 16] Compiling Happstack.Data.GOps ( src\Happstack\Data\GOps.hs, 
dist\buil

d\Happstack\Data\GOps.o )

src\Happstack\Data\Serialize.hs:1:85:
Warning: -XPatternSignatures is deprecated: use 
-XScopedTypeVariables or pra

gma {-# LANGUAGE ScopedTypeVariables #-} instead

src\Happstack\Data\Xml\Base.hs:6:13:
Warning: -XPatternSignatures is deprecated: use 
-XScopedTypeVariables or pra

gma {-# LANGUAGE ScopedTypeVariables #-} instead
[ 2 of 16] Compiling Happstack.Data.Normalize ( 
src\Happstack\Data\Normalize.hs,

 dist\build\Happstack\Data\Normalize.o )
[ 3 of 16] Compiling Happstack.Data.Migrate ( 
src\Happstack\Data\Migrate.hs, dis

t\build\Happstack\Data\Migrate.o )
[ 4 of 16] Compiling Happstack.Data.Default ( 
src\Happstack\Data\Default.hs, dis

t\build\Happstack\Data\Default.o )
[ 5 of 16] Compiling Happstack.Data.DeriveAll ( 
src\Happstack\Data\DeriveAll.hs,

 dist\build\Happstack\Data\DeriveAll.o )
[ 6 of 16] Compiling Happstack.Data.Default.Generic ( 
src\Happstack\Data\Default

\Generic.hs, dist\build\Happstack\Data\Default\Generic.o )
[ 7 of 16] Compiling Happstack.Data.Xml.Base ( 
src\Happstack\Data\Xml\Base.hs, d

ist\build\Happstack\Data\Xml\Base.o )
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Loading package array-0.3.0.0 ... linking ... done.
Loading package bytestring-0.9.1.5 ... linking ... done.
Loading package containers-0.3.0.0 ... linking ... done.
Loading package pretty-1.0.1.1 ... linking ... done.
Loading package template-haskell ... linking ... done.
Loading package syb-with-class-0.6.1 ... linking ... done.
Loading package HUnit-1.2.2.1 ... linking ... done.
Loading package syb-0.1.0.2 ... linking ... done.
Loading package base-3.0.3.2 ... linking ... done.
Loading package Win32-2.2.0.1 ... linking ... done.
Loading package old-locale-1.0.0.2 ... linking ... done.
Loading package time-1.1.4 ... linking ... done.
Loading package random-1.0.0.2 ... linking ... done.
Loading package QuickCheck-1.2.0.0 ... linking ... done.
Loading package extensible-exceptions-0.1.1.1 ... linking ... done.
Loading package mtl-1.1.0.2 ... linking ... done.
Loading package old-time-1.0.0.3 ... linking ... done.
Loading package parsec-2.1.0.1 ... linking ... done.
Loading package hsemail-1.3 ... linking ... done.
Loading package network-2.2.1.7 ... linking ... done.
Loading package SMTPClient-1.0.1 ... linking ... done.
Loading package filepath-1.1.0.3 ... linking ... done.
Loading package directory-1.0.1.0 ... linking ... done.
Loading package process-1.0.1.2 ... linking ... done.
Loading package hslogger-1.0.7 ... linking ... done.
Loading package deepseq-1.1.0.0 ... linking ... done.
Loading package parallel-2.2.0.1 ... linking ... done.
Loading package strict-concurrency-0.2.2 ... linking ... done.
Loading package unix-compat-0.1.2.1 ... linking ... done.
Loading package happstack-util-0.4.1 ... linking ... done.
Loading package binary-0.5.0.2 ... linking ... done.
Loading package haskell98 ... linking ... done.
Loading package HaXml-1.13.3 ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
ghc.exe: dist\build\Happstack\Data\Default.o: unknown symbol 
`_sybzmwithzmclassz

m0zi6zi1_DataziGenericsziSYBziWithClassziInstances_constrZMacbsZN_closure'

cabal: Error: some packages failed to install:
happstack-data-0.4.1 failed during the building phase. The exception was:
ExitFailure 1

I can create more detailed log if you need.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type arithmetic with ATs/TFs

2010-02-11 Thread Ryan Ingram
Actually, at least in GHC, associated types are just syntax sugar for
type families.

That is, this code:

class Container c where
    type Element c :: *
    view :: c -> Maybe (Element c,c)

instance Container [a] where
    type Element [a] = a
    view [] = Nothing
    view (x:xs) = Just (x,xs)

is the same as this code:

type family Element c :: *
class Container c where
    view :: c -> Maybe (Element c, c)
type instance Element [a] = a
instance Container [a] where
    view [] = Nothing
    view (x:xs) = Just (x,xs)

  -- ryan

On Thu, Feb 11, 2010 at 1:10 PM, Andrew Coppin
andrewcop...@btinternet.com wrote:
 Andrew Coppin wrote:

 OK, so I sat down today and tried this, but I can't figure out how.

 There are various examples of type-level arithmetic around the place. For
 example,

 http://www.haskell.org/haskellwiki/Type_arithmetic

 (This is THE first hit on Google, by the way. Haskell is apparently THAT
 popular!) But this does type arithmetic using functional dependencies; what
 I'm trying to figure out is how to do that with associated types.

 Any hints?

 Several people have now replied to this, both on and off-list. But all the
 replies use type families, not associated types.

 Now type families are something I don't yet comprehend. (Perhaps the replies
 will help... I haven't studied them yet.) What I understand is that ATs
 allow you to write things like

  class Container c where
   type Element c :: *
   ...

 And now you can explicitly talk about the kind of element a container can
 hold, rather than relying on the type constructor having a particular kind
 or something. So the above works for containers that can hold *anything*
 (such as lists), containers which can only hold *one* thing (e.g.,
 ByteString), and containers which can hold only certain things (e.g., Set).

 ...which is great. But I can't see a way to use this for type arithmetic.
 Possibly because I don't have a dramatically solid mental model of exactly
 how it works. You'd *think* that something like

  class Add x y where
   type Sum x y :: *

  instance Add x y => Add (Succ x) y where
   type Sum (Succ x) y = Succ (Sum x y)

 ought to work, but apparently not.

 As to what type families - type declarations outside of a class - end up
 meaning, I haven't the vaguest idea. The Wiki page makes it sound
 increadibly complicated...

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Examples/docs for simulated annealing bindings

2010-02-11 Thread John Meacham
On Thu, Oct 15, 2009 at 02:41:42PM +0100, Dougal Stanton wrote:
 I found the HsASA library [1] on Hackage, but there's no documentation
 and it's not particularly intuitive. I can't see any obvious way of
 choosing initial config or generating new configurations. Google
 reveals no one using it. Does anyone have ideas?

Hi, performing ASA efficiently relies on a very efficient implementation
of the algorithm, the actual parameters of the ASA are hard coded as C
#defines, see the original C distribution for documentation on them. For
using HsASA in projects, it is recommended you directly include the
modules into your program and modify them with the specific parameters
appropriate to your task.

John

-- 
John Meacham - ⑆repetae.net⑆john⑈ - http://notanumber.net/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Type arithmetic with ATs/TFs

2010-02-11 Thread Robert Greayer
What Ryan said, and here's an example of addition with ATs,
specifically (not thoroughly tested, but tested a little).  The
translation to TFs sans ATs is straightforward.

class Add a b where
type SumType a b

instance Add Zero Zero where
type SumType Zero Zero = Zero

instance Add (Succ a) Zero where
type SumType (Succ a) Zero = Succ a

instance Add Zero (Succ a) where
type SumType Zero (Succ a) = Succ a

instance Add (Succ a) (Succ b) where
type SumType (Succ a) (Succ b) = Succ (Succ (SumType a b))
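
As a small companion sketch (not thoroughly tested either), the Peano
types the instances above assume, plus a value-level check that the family
reduces -- assuming it all lives in one module with TypeFamilies and
EmptyDataDecls enabled:

data Zero
data Succ a

type One = Succ Zero
type Two = Succ One

-- This only type-checks because SumType One Two and Succ (Succ (Succ Zero))
-- are the same type, i.e. 1 + 2 reduces to 3 at the type level.
check :: SumType One Two -> Succ (Succ (Succ Zero))
check = id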


On Thu, Feb 11, 2010 at 4:10 PM, Andrew Coppin
andrewcop...@btinternet.com wrote:
 Andrew Coppin wrote:

 OK, so I sat down today and tried this, but I can't figure out how.

 There are various examples of type-level arithmetic around the place. For
 example,

 http://www.haskell.org/haskellwiki/Type_arithmetic

 (This is THE first hit on Google, by the way. Haskell is apparently THAT
 popular!) But this does type arithmetic using functional dependencies; what
 I'm trying to figure out is how to do that with associated types.

 Any hints?

 Several people have now replied to this, both on and off-list. But all the
 replies use type families, not associated types.

 Now type families are something I don't yet comprehend. (Perhaps the replies
 will help... I haven't studied them yet.) What I understand is that ATs
 allow you to write things like

  class Container c where
   type Element c :: *
   ...

 And now you can explicitly talk about the kind of element a container can
 hold, rather than relying on the type constructor having a particular kind
 or something. So the above works for containers that can hold *anything*
 (such as lists), containers which can only hold *one* thing (e.g.,
 ByteString), and containers which can hold only certain things (e.g., Set).

 ...which is great. But I can't see a way to use this for type arithmetic.
 Possibly because I don't have a dramatically solid mental model of exactly
 how it works. You'd *think* that something like

  class Add x y where
   type Sum x y :: *

  instance Add x y => Add (Succ x) y where
   type Sum (Succ x) y = Succ (Sum x y)

 ought to work, but apparently not.

 As to what type families - type declarations outside of a class - end up
 meaning, I haven't the vaguest idea. The Wiki page makes it sound
 increadibly complicated...

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell-src type inference algorithm?

2010-02-11 Thread Niklas Broberg
 Anyone know of a type inference utility that can run right on haskell-src
 types? or one that could be easily adapted?

This is very high on my wish-list for haskell-src-exts, and I'm hoping
the stuff Lennart will contribute will go a long way towards making it
feasible. I believe I can safely say that no such tool exists (and if
it does, why haven't you told me?? ;-)), but if you implement (parts
of) one yourself I'd be more than interested to see, and incorporate,
the results.

Cheers,

/Niklas
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Erik de Castro Lopo
Michael Lesniak wrote:

 Hello,
 
 
  elegance of Haskell. Whether Haskell becomes an easy choice for
  commercial work or remains a boutique language depends on how easy it
  is to build today's applications.
 
 Do you (or anyone reading this thread) know of some kind of wishlist
 of missing features and/or libraries? Would be nice to see what's
 still missing.

HTTPS support in the HTTP library. One library that JustWorks (tm) for
HTTP and HTTPS.

Erik
-- 
--
Erik de Castro Lopo
http://www.mega-nerd.com/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Haskell-Cafe Digest, Vol 78, Issue 14

2010-02-11 Thread John Lato
On Thu, Feb 11, 2010 at 10:00 AM, Gregory Collins
g...@gregorycollins.net wrote:
 Maciej Piechotka uzytkown...@gmail.com writes:

 On Tue, 2010-02-09 at 16:41 +, John Lato wrote:

 See http://inmachina.net/~jwlato/haskell/ParsecIteratee.hs for a valid
 Stream instance using iteratee.  Also Gregory Collins recently posted
 an iteratee wrapper for Attoparsec to haskell-cafe.  To my knowledge
 these are not yet in any packages, but hackage is vast.

 Hmm. Am I correct that his implementation caches everything?

 The one that John posted (iteratees on top of parsec) has to keep a copy
 of the entire input, because parsec wants to be able to do arbitrary
 backtracking on the stream.

This is true, however I believe this alternative approach is also
correct.  The Cursor holds the stream state, and parsec holds on to
the Cursor for backtracking.  Data is only read within the Iteratee
monad when it goes beyond the currently available cursors, at which
point another cursor is added to the linked list (implemented with
IORef or other mutable reference).

The downside to this approach is that data is consumed from the
iteratee stream for a partial parse, even if the parse fails.  I did
not want this behavior, so I chose a different approach.
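
Roughly, I imagine such a cursor looks something like this (a sketch of the
shape only, not anyone's actual code):

import Data.IORef (IORef)
import qualified Data.ByteString as B

-- One chunk of already-read input plus a mutable link to whatever comes
-- next. The link starts out Unread and is filled in (by reading from the
-- iteratee stream) only when a parser runs past the end of the chunk.
data NextCursor = More Cursor | EOF | Unread
data Cursor     = Cursor B.ByteString (IORef NextCursor)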


 I tried to rewrite the implementation using... well imperative linked
 list. For trivial benchmark it have large improvement (althought it may
 be due to error in test such as using ByteString) and, I believe, that
 it allows to free memory before finish.

 Results of test on Core 2 Duo 2.8 GHz:
 10:   0.000455s       0.000181s
 100:  0.000669s       0.001104s
 1000: 0.005209s       0.023704s
 1:        0.053292s       1.423443s
 10:       0.508093s       132.208597s


I'm surprised your version has better performance for small numbers of
elements.  I wonder if it's partially due to more aggressive inlining
from GHC or something of that nature.  Or maybe your version compiles
to a tighter loop as elements can be gc'd.

I expected poor performance of my code for larger numbers of elements,
as demonstrated here.

I envisioned the usage scenario where parsers would be relatively
short (< 20 chars), and most of the work would be done directly with
iteratees.  In this case it would be more important to preserve the
stream state in the case of a failed parse, and the performance issues
of appending chunks wouldn't arise either.

Cheers,
John
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: Happstack 0.4.1

2010-02-11 Thread Jeremy Shaw

As a work around install happstack-data like this:

cabal install -O0 happstack-data


More details at the end of this thread:

http://groups.google.com/group/happs/browse_thread/thread/c66c74294d8eabf/a4de5e67853925e0?lnk=gstq=happstack-data#a4de5e67853925e0

I am using GHC 6.13, and I believe this problem has been resolved. Not  
sure what the best workaround is in the meantime though. I guess I  
should at least update the download page to mention it.


- jeremy

On Feb 11, 2010, at 4:19 PM, Станислав Черничкин  
wrote:


I was unable to install happstack-data-0.4.1 on windows with GHC  
6.12.1. Here the log:


[--snip: build log identical to the one quoted in full above--]
ghc.exe: dist\build\Happstack\Data\Default.o: unknown symbol  
`_sybzmwithzmclassz
m0zi6zi1_DataziGenericsziSYBziWithClassziInstances_constrZMacbsZN_closure 
'


cabal: Error: some packages failed to install:
happstack-data-0.4.1 failed during the building phase. The exception  
was:

ExitFailure 1

I can create more detailed log if you need.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: sendfile leaking descriptors on Linux?

2010-02-11 Thread Jeremy Shaw


On Feb 11, 2010, at 1:57 PM, Bardur Arantsson wrote:



2. the remote client has terminated the connection as far as it is
concerned but not notified the server -- when you try to send data  
it will

reject it, and send/write/sendfile/etc will raise sigPIPE.
Looking at your debug output, we are seeing the sigPIPE / Broken  
Pipe error
most of the time. But then there is the case where we get stuck on  
the

threadWaitWrite.
threadWaitWrite is ultimately implemented by passing the file  
descriptor to
the list of write descriptors in a call to select(). It seems,  
however, that
select() is not waking up just because calling write() on a file  
descriptor

*would* cause sigPIPE.


That's what I expect select() with an errfd FDSET would do.


Nope. The exceptfds are only triggered in esoteric conditions. For TCP  
sockets, I think it only occurs if there is out-of-band data available  
to be read via recv() with the MSG_OOB flag.


http://uw714doc.sco.com/en/SDK_netapi/sockC.OoBdata.html

The easiest way to confirm this case is probably to write a small,  
pure C

program and see what really happens.
If this is the case, then it means the only way to tell if the  
client has
abruptly dropped the connection is to actually try sending the data  
and see
if the sending function calls sigPIPE. And that means doing some  
sort of

polling/timeout?


Correct, but the trouble is deciding how often to poll and/or how  
long the timeout should be.


I don't see any easy answer to that. That's why my suggested  
solution is to simply punt it to the OS (by using portable mode)  
and suck up the extra overhead of the portable solution. Hopefully  
the new GHC I/O manager will make it possible to have a proper  
solution.


The whole point of the sendfile library is to use sendfile(), so not  
using sendfile() seems like the wrong solution. I am also not  
convinced that the new GHC I/O manager will do anything new to make it  
possible to have a proper solution. I believe we would be seeing the  
same error even in pure C, so we need to know the work around that  
works in pure C as well. I am not convinced we are punting to the OS  
by using portable mode either (more below).


I do not have a good explanation as to why the portable version  
does not
fail. Except maybe it is just so slow that it does not ever fill up  
the

buffer, and hence does not get stuck in threadWaitWrite?


The portable version doesn't call threadWaitWrite. It simply turns  
the Socket into a handle (which causes it to become blocking)  and  
so the kernel is tasked with handling all the gritty details.


The portable version does not directly call threadWaitWrite, but it  
still calls it.


Data.ByteString.Char8.hPutStr calls
Data.ByteString.hPut which calls
Data.ByteString.hPutBuf which calls
System.IO.hPutBuf which calls
GHC.IO.Handle.Text.hPutBuf which calls
GHC.IO.Handle.Text.bufWrite which calls
GHC.IO.Device.write which calls
GHC.IO.FD.fdWrite which calls
GHC.IO.FD.writeRawBufferPtr which calls

which is defined as:

writeRawBufferPtr :: String -> FD -> Ptr Word8 -> Int -> CSize -> IO CInt
writeRawBufferPtr loc !fd buf off len
  | isNonBlocking fd = unsafe_write -- unsafe is ok, it can't block
  | otherwise   = do r <- unsafe_fdReady (fdFD fd) 1 0 0
                     if r /= 0
                       then write
                       else do threadWaitWrite (fromIntegral (fdFD fd)); write
  where
    do_write call = fromIntegral `fmap`
                      throwErrnoIfMinus1RetryMayBlock loc call
                        (threadWaitWrite (fromIntegral (fdFD fd)))
    write         = if threaded then safe_write else unsafe_write
    unsafe_write  = do_write (c_write (fdFD fd) (buf `plusPtr` off) len)
    safe_write    = do_write (c_safe_write (fdFD fd) (buf `plusPtr` off) len)


According to the following test program, I expect that 'isNonBlocking  
fd' will be 'True'. So it seems like the portable solution should be  
vulnerable to the same condition. Perhaps the portable version is just  
so slow that the OS buffers never fill up so EAGAIN is never raised?


---

{-# LANGUAGE RecordWildCards #-}
module Main where

import Control.Concurrent (forkIO)
import Control.Monad (forever)
import Network (PortID(PortNumber), Socket, listenOn)
import Network.Socket (accept, socketToHandle)
import System.IO
import qualified GHC.IO.FD as FD
import GHC.IO.Handle.Internals (withHandle, flushWriteBuffer)
import GHC.IO.Handle.Types (Handle__(..), HandleType(..))
import qualified GHC.IO.FD as FD
import System.Posix.Types (Fd(..))
import System.IO.Error
import GHC.IO.Exception
import Data.Typeable (cast)
import GHC.IO.Handle.Internals (wantWritableHandle)

main =
  listen (PortNumber (toEnum 2525)) $ \s ->
     do h <- socketToHandle s ReadWriteMode
        wantWritableHandle "main" h $ \h_ -> showBlocking h_


showBlocking :: Handle__ -> IO ()

[Haskell-cafe] Re: sendfile leaking descriptors on Linux?

2010-02-11 Thread Bardur Arantsson

Thomas DuBuisson wrote:

Bardur Arantsson s...@scientician.net wrote:

...
       then do errno <- getErrno
               if errno == eAGAIN
                 then do
                    threadDelay 100
                    sendfile out_fd in_fd poff bytes
                 else throwErrno "Network.Socket.SendFile.Linux"
      else return (fromIntegral sbytes)

That is, I removed the threadWaitWrite in favor of just adding a
threadDelay 100 when eAGAIN is encountered.

With this code, I cannot provoke the leak.

Unfortunately this isn't really a solution -- the CPU is pegged at
~50% when I do this and it's not exactly elegant to have a hardcoded
100 ms delay in there. :)


I don't think it matters wrt the desired final solution, but this is
NOT a 100 ms delay.  It is a 0.1 ms delay, which is less than a GHC
time slice and as such is basically a tight loop.  If you use a
reasonable value for the delay you will probably see the CPU being
almost completely idle.



Excellent, thanks. I was probably too tired or annoyed when I wrote that 
code. I sorta-kinda-knew I must have been doing *something* wrong :).


I'll retry with a more reasonable delay tomorrow.

Cheers,

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Seen on reddit: or, foldl and foldr considered slightly harmful

2010-02-11 Thread Richard O'Keefe


On Feb 11, 2010, at 9:41 PM, Johann Höchtl wrote:


In a presentation of Guy Steele for ICFP 2009 in Edinburgh:
http://www.vimeo.com/6624203
he considers foldl and foldr harmful as they hinder parallelism
because of Process first element, then the rest Instead he proposes
a divide and merge aproach, especially in the light of going parallel.


I think I first heard about that issue in the 80s.

Let me just take an Erlang perspective on this for a few minutes.

Ignoring types, the classic foldl in Erlang (which is strict) is

foldl(F, A, [X|Xs]) -> foldl(F, F(X, A), Xs);
foldl(_, A, []) -> A.

In a strict language, you have to wait for F to finish
before you can go on to the next element.
In a non-strict language, you don't.  The interesting
question is how far F can get just given X, before it
has to look at A.

Suppose we can factor F(X, A) into G(H(X), A)
where H is rather time-consuming, but G is cheap (maybe it's an add).

So we write

foldl_map(G, H, A, [X|Xs]) -> foldl_map(G, H, G(H(X), A), Xs);
foldl_map(_, _, A, []) -> A.

Now we can parallelise it.  We'll spark off a separate thread
for each call of H.

pfoldl_map(G, H, A, Xs) ->
    Outer = self(),
    Pids = [spawn(fun () -> Outer ! {self(), H(X)} end) || X <- Xs],
    foldl(G, A, [receive {Pid, Val} -> Val end || Pid <- Pids]).

If N is the length of the list and G is O(1)
we have O(N) time to traverse the list
and O(N) time to collect and fold the results.
The H calculations can take place in parallel.

Provided each H calculation is expensive enough, we may not _care_
about the O(N) bits.  In fact, if they aren't, we probably shouldn't
be worrying about parallelism here.
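
The same factoring in Haskell might look like the following sketch (my
phrasing, assuming the Control.Parallel.Strategies API from a recent
parallel package): evaluate the expensive h's in parallel, then run the
cheap fold g sequentially.

import Control.Parallel.Strategies (parList, rseq, using)
import Data.List (foldl')

-- foldlMap g h a xs behaves like foldl (\acc x -> g (h x) acc) a xs, but
-- the h applications are evaluated (to WHNF) in parallel before the cheap
-- sequential fold.
foldlMap :: (b -> a -> a) -> (x -> b) -> a -> [x] -> a
foldlMap g h a xs = foldl' (flip g) a (map h xs `using` parList rseq)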

The problem exists when we *can't* factor F(X, A) into G(H(X), A)
with a cheap G and costly H.  Then divide-and-parallel-conquer
using k-way trees gives us a computation depth of $\log_k N$
calls of G waiting for results to come back.

As I say, I met this stuff in the 80s.  It's hardly news.

Indeed, if I remember correctly, back when Occam was hot stuff
people were worried about
PAR i = 0 for n
   ...
which forks n processes.  Doing that one at a time in the parent
process takes O(n) time.  I believe something was done about
making this work by recursive bisection.



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: sendfile leaking descriptors on Linux?

2010-02-11 Thread Bardur Arantsson

Jeremy Shaw wrote:


On Feb 11, 2010, at 1:57 PM, Bardur Arantsson wrote:



[--snip lots of technical info--]

Thanks for digging so much into this.

Just a couple of comments:



The whole point of the sendfile library is to use sendfile(), so not 
using sendfile() seems like the wrong solution.


Heh, well, presumably it could still use sendfile() only on platforms where 
it can actually guarantee correctness :).




There is some evidence that when you are doing select() on a readfds, 
and the connection is closed, select() will indicate that the fds is 
ready to be read, but when you read it, you get 0-bytes. That indicates 
that a disconnect has happened. However, if you are only doing 
read()/recv(), I expect that only happens in the event of a proper 
disconnect, because if you are just listening for packets, there is no 
way to tell the difference between the sender just not saying anything, 
and the sender dying:


True, but the point here is that the OS has a built-in timeout mechanism 
(via keepalives) and *can* tell the program when that timeout has elapsed.


That's the timeout we're trying to get at instead of having to 
implement a new one.


Good point about the the readfds triggering when the client disconnects. 
I think that's what I've been seeing in all my other network-related 
code and I just misremembered the details. All my code is extremely 
likely to have been both reading and writing from (roughly) the same set 
of FDs at the same time.


If this method of detection is correct, then what we need is a 
threadWaitReadWrite, that will notify us if the socket can be read or 
written. The IO manager does not currently provide a function like 
that.. but we could fake it like this: (untested):


import Control.Concurrent
import Control.Concurrent.MVar
import System.Posix.Types

data RW = Read | Write

threadWaitReadWrite :: Fd -> IO RW
threadWaitReadWrite fd =
  do m <- newEmptyMVar
     rid <- forkIO $ threadWaitRead fd  >> putMVar m Read
     wid <- forkIO $ threadWaitWrite fd >> putMVar m Write
     r <- takeMVar m
     killThread rid
     killThread wid
     return r



I'll try to get the sendfile code to use this instead. AFAICT it 
shouldn't actually be necessary to peek on the read end of the socket 
to detect that something has gone wrong. We're guaranteed that 
sendfile() to a connection that's died (according to the OS, either due 
to proper disconnect or a timeout) will fail.


It might get a bit tricky to use this if the client is actually expecting 
to send proper data while the sendfile() is in progress -- if there's 
actual data to be read from the socket() then the naive replace 
threadWaitR by threadWaitRW will end up busy-waiting on EAGAIN since 
the socket() will be readable every time

threadWaitReadWrite gets called.

HOWEVER, that's not an issue in my particular scenario, so a simple 
replacement of threadWaitWrite by threadWaitReadWrite should do fine for 
testing purposes.


Of course, in the case where the client disconnects because someone 
turns off the power or pulls the ethernet cable, we have no way of 
knowing what is going on -- so there is still the possibility that dead 
connections will be left open for a long time.


True, but then it's (properly) left to the OS to decide and timeouts can 
be controlled via setsockopt -- as they should IMO.


I'll test tomorrow.

What I'll expect is that I'll still see a few dead threads lingering 
around for ~60 seconds (the OS-based timeout), but that I'll not see any
threads lingering indefinitely -- something which usually happens after 
a few hours of persistent use of my media server.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread Don Stewart
bos:
 I'm thinking of switching the statistics library over to using vector. uvector
 is pretty bit-rotted in comparison to vector at this point, and it's really
 seeing no development, while vector is The Shiny Future. Roman, would you call
 the vector library good enough to use in production at the moment?

uvector's not seeing much development, but at least in the last round of
benchmarks it was still consistently faster -- since it's been
micro-optimized. 

Also, we have uvector-algorithms, so you can sort etc. a uvector.

I'm not sure it's the long-term solution, but it's a simpler, faster lib
at the moment, with more surrounding support, users and documentation.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread Don Stewart
rl:
 On 11/02/2010, at 05:03, Bryan O'Sullivan wrote:
 
  I'm thinking of switching the statistics library over to using vector. 
  uvector is pretty bit-rotted in comparison to vector at this point, and 
  it's really seeing no development, while vector is The Shiny Future. Roman, 
  would you call the vector library good enough to use in production at the 
  moment?
 
 Yes, with the caveat that I haven't really used it in production code
 (I have tested and benchmarked it, though). BTW, I'll release version
 0.5 as soon as get a code.haskell.org account and move the repo there.
 

That's the main problem. I think we could move to vector as a whole, if
the suite of testing/ performance/documentation stuff from uvector was ported. 

Maybe this is a good job for the hackathon.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: Dungeons of Wor - a largish FRP example and a fun game, all in one!

2010-02-11 Thread Thomas Hartman
1) This is missing the obligatory youtube video.

2) AWESOME! :)

thomas.

2010/2/11 Patai Gergely patai_gerg...@fastmail.fm:
 Hello all,

 I just uploaded the first public version of Dungeons of Wor [1], a
 homage to the renowned three-decade-old arcade game, Wizard of Wor.
 While it makes a fine time killer if you have a few minutes to spare, it
 might be of special interest to the lost souls who are trying to figure
 out FRP. The game was programmed using the Simple version of the
 experimental branch of Elerea [2], which provides first-class discrete
 streams to describe time-varying quantities, and the main game logic is
 described as a composition of streams instead of a world state
 transformer. Developing in this manner was an interesting experience,
 and I'll write about it in more detail over the weekend.

 All the best,

 Gergely

 [1] http://hackage.haskell.org/package/dow
 [2]
 http://hackage.haskell.org/packages/archive/elerea/1.2.3/doc/html/FRP-Elerea-Experimental-Simple.html

 --
 http://www.fastmail.fm - The professional email service

 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread Dan Doel
On Thursday 11 February 2010 12:43:10 pm stefan kersten wrote:
 On 10.02.10 19:03, Bryan O'Sullivan wrote:
  I'm thinking of switching the statistics library over to using vector.
 
 that would be even better of course! an O(0) solution, at least for me ;)
 let me know if i can be of any help (e.g. in testing). i suppose
 uvector-algorithms would also need to be ported to vector, then.

I could do this. I've been occupied with things other than uvector-algorithms 
for a while, but I've been meaning to get back into it (perhaps finally get 
timsort in there).

How widespread is the consensus on vector over uvector? dons seems to have 
added to uvector as recently as mid December, so I'm not really sure how bit 
rotted it is. But vector seems to have a lot more going on in it, including 
boxed arrays, which I suppose is a gap in using uvector.

I also notice that vector seems to have discarded the idea of

  Vec (A * B) = Vec A * Vec B

with associated types. Was this determined to not be worth it? uvector-
algorithms actually used the fact for a cute trick (Schwartzian transform can 
be done for such arrays by computing a new array containing 'f e' for each 'e' 
in the original array, pairing up the two arrays, and performing an algorithm 
that only looks at the 'f e' half, and then pulling the 'e' half out of the 
pair; doing it this way requires no copying of the original array).
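
For reference, roughly the same trick can be sketched against vector's
Data.Vector.Unboxed interface, where zip/unzip are O(1) thanks to the pair
representation. This assumes the current Unboxed API (map, zip, unzip,
modify) and a sortBy in the style of vector-algorithms, which is not part
of vector itself:

import qualified Data.Vector.Unboxed as U
import qualified Data.Vector.Algorithms.Intro as I  -- assumed dependency
import Data.Ord (comparing)

-- Sort by a (possibly expensive) key without recomputing it in the
-- comparisons: pair each element with its key, sort looking only at the
-- key half, then drop the keys again.
sortOnKey :: (U.Unbox a, U.Unbox k, Ord k) => (a -> k) -> U.Vector a -> U.Vector a
sortOnKey f xs = snd (U.unzip sorted)
  where
    keys   = U.map f xs
    sorted = U.modify (I.sortBy (comparing fst)) (U.zip keys xs)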

Anyhow, if vector is the clear way forward, I don't mind porting uvector-
algorithms. But I don't relish maintaining two slightly different parallel 
branches.

-- Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Hackage download statistis

2010-02-11 Thread Don Stewart
mauricio.antunes:
 Hi, all,

 Some time ago download statistics for Hackage were made available
 and analysed in a few ways. Googling for them still finds them at a
 Galois web page.

 I thought that, since the tools to get that were there, this would
 be output once in a while, but it seems it hasn't been done since
 then.

 Does anyone know if there are plans about that?

Since we moved Hackage to a new machine, the logs are stored
differently. I need to get access to the machine to get the stats
again. Hopefully this weekend.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: ANN: Dungeons of Wor - a largish FRP example and a fun game, all in one!

2010-02-11 Thread Simon Michael

Exciting! But on a Mac, I can't get the window to become focussed or accept 
input. Tips?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread Roman Leshchinskiy
On 12/02/2010, at 12:39, Don Stewart wrote:

 bos:
 I'm thinking of switching the statistics library over to using vector. 
 uvector
 is pretty bit-rotted in comparison to vector at this point, and it's really
 seeing no development, while vector is The Shiny Future. Roman, would you 
 call
 the vector library good enough to use in production at the moment?
 
 uvector's not seeing much development, but at least in the last round of
 benchmarks it was still consistently faster -- since it's been
 micro-optimized.

FWIW, the development version of vector is usually faster than both uvector and 
dph-prim-seq, at least for the development version of NoSlow.

Roman


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread Don Stewart
rl:
 On 12/02/2010, at 12:39, Don Stewart wrote:
 
  bos:
  I'm thinking of switching the statistics library over to using vector. 
  uvector
  is pretty bit-rotted in comparison to vector at this point, and it's really
  seeing no development, while vector is The Shiny Future. Roman, would you 
  call
  the vector library good enough to use in production at the moment?
  
  uvector's not seeing much development, but at least in the last round of
  benchmarks it was still consistently faster -- since it's been
  micro-optimized.
 
 FWIW, the development version of vector is usually faster than both
 uvector and dph-prim-seq, at least for the development version of
 NoSlow.

Ah ha -- that's useful. Public benchmarks soon? In time for the Zurich
Hackathon?? (March 20)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread Roman Leshchinskiy
On 12/02/2010, at 12:40, Don Stewart wrote:

 rl:
 On 11/02/2010, at 05:03, Bryan O'Sullivan wrote:
 
 I'm thinking of switching the statistics library over to using vector. 
 uvector is pretty bit-rotted in comparison to vector at this point, and 
 it's really seeing no development, while vector is The Shiny Future. Roman, 
 would you call the vector library good enough to use in production at the 
 moment?
 
 Yes, with the caveat that I haven't really used it in production code
 (I have tested and benchmarked it, though). BTW, I'll release version
 0.5 as soon as I get a code.haskell.org account and move the repo there.
 
 
 That's the main problem. I think we could move to vector as a whole, if
 the suite of testing/ performance/documentation stuff from uvector was ported.

Hmm, I'm not sure what you mean here. Mostly thanks to Max Bolingbroke's 
efforts, vector has a fairly extensive testsuite. I benchmark it a lot (with 
NoSlow) and haven't found any significant performance problems in a while. As 
to documentation, there are comments for most of the functions :-)

Roman


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread Don Stewart
rl:
 On 12/02/2010, at 12:39, Don Stewart wrote:
 
  bos:
  I'm thinking of switching the statistics library over to using vector. 
  uvector
  is pretty bit-rotted in comparison to vector at this point, and it's really
  seeing no development, while vector is The Shiny Future. Roman, would you 
  call
  the vector library good enough to use in production at the moment?
  
  uvector's not seeing much development, but at least in the last round of
  benchmarks it was still consistently faster -- since it's been
  micro-optimized.
 
 FWIW, the development version of vector is usually faster than both
 uvector and dph-prim-seq, at least for the development version of
 NoSlow.

If Roman declares the vector to be faster -- my main concern here for
flat uarrays -- and makes the repo available so we can work on it, I'd
be willing to merge uvector's tests and docs and extra array operations
in.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread Roman Leshchinskiy
On 12/02/2010, at 12:54, Dan Doel wrote:

 I also notice that vector seems to have discarded the idea of
 
  Vec (A * B) = Vec A * Vec B

Oh no, it hasn't. In contrast to uvector/DPH, which use a custom strict tuple 
type for  rather outdated reasons, vector uses normal tuples. For instance, 
Data.Vector.Unboxed.Vector (a,b,c) is internally represented as a triple of 
unboxed vectors of a, b and c. In general, vector supports 4 kinds of arrays at 
the moment:

Data.Vector.Primitive  wrappers around ByteArray#, can store primitive types
Data.Vector.Unboxed    uses type families, can store everything
                       D.V.Primitive can, plus tuples, and can be extended
                       for user-defined types
Data.Vector.Storable   wrappers around ForeignPtr, can store Storable things
Data.Vector            boxed arrays

Roman
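
For illustration, a tiny sketch of what those four flavours look like from
the user's side, assuming the usual fromList/sum/zip functions are exported
by each module (the example itself is not from this thread):

import qualified Data.Vector           as B   -- boxed elements
import qualified Data.Vector.Unboxed   as U   -- type-family based, handles tuples
import qualified Data.Vector.Storable  as S   -- ForeignPtr-backed, Storable elements
import qualified Data.Vector.Primitive as P   -- ByteArray#-backed, primitive elements

sums :: (Double, Double, Int)
sums =
  ( B.sum (B.fromList [1.0, 2.0, 3.0 :: Double])
  , S.sum (S.fromList [1.0, 2.0, 3.0 :: Double])
  , P.sum (P.fromList [1, 2, 3 :: Int])
  )

-- An unboxed vector of pairs is stored as a pair of unboxed vectors,
-- so zipping and unzipping are cheap:
pairs :: U.Vector (Int, Double)
pairs = U.zip (U.fromList [1, 2, 3]) (U.fromList [1.5, 2.5, 3.5])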


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: hledger 0.8 released

2010-02-11 Thread Simon Michael

hledger 0.8 is out!

http://hledger.org
http://hledger.org/MANUAL.html#installing

Bug fixes, refactoring and Hi-Res Graphical Charts. (See Roman  
Cheplyaka's blog: http://www.reddit.com/r/haskell/comments/b0w0q/using_the_hledger_package_to_track_finances)


Best - Simon


Release notes:
..

  * parsing: in date=date2, use first date's year as a default for the second

  * add: ctrl-d doesn't work on windows, suggest ctrl-c instead

  * add: --no-new-accounts option disallows new accounts (Roman Cheplyaka)

  * add: re-use the previous transaction's date as default (Roman Cheplyaka)

  * add: a command-line argument now filters by account during history
    matching (Roman Cheplyaka)

  * chart: new command, generates balances pie chart (requires -fchart
    flag, gtk2hs) (Roman Cheplyaka, Simon Michael)

  * register: make reporting intervals honour a display expression (#18)

  * web: fix help link

  * web: use today as default when adding with a blank date

  * web: re-enable account/period fields, they seem to be fixed, along with
    file re-reading (#16)

  * web: get static files from the cabal data dir, or the current dir when
    using make (#13)

  * web: preserve encoding during add, assuming it's utf-8 (#15)

  * fix some non-utf8-aware file handling (#15)

  * filter ledger again for each command, not just once at program start

  * refactoring, clearer data types

  Stats:
  62 days since last release,
  2 contributors,
  76 commits,
  3464 lines of non-test code,
  97 tests,
  53% test coverage

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread Dan Doel
On Thursday 11 February 2010 9:57:40 pm Roman Leshchinskiy wrote:
 Oh no, it hasn't. In contrast to uvector/DPH, which use a custom strict
 tuple type for  rather outdated reasons, vector uses normal tuples. For
 instance, Data.Vector.Unboxed.Vector (a,b,c) is internally represented as
 a triple of unboxed vectors of a, b and c. In general, vector supports 4
 kinds of arrays at the moment:

Ah, all right. I was looking at the (0.4.2) documentation on hackage, which 
doesn't mention Data.Vector.Unboxed.

Never mind about that bit, then.
-- Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: hledger 0.8 released

2010-02-11 Thread Brandon S. Allbery KF8NH

On Feb 11, 2010, at 22:02 , Simon Michael wrote:

 * add: ctrl-d doesn't work on windows, suggest ctrl-c instead



Ctrl-Z would be the usual EOF in the Windows world, fwiw.

--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allb...@kf8nh.com
system administrator [openafs,heimdal,too many hats] allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon universityKF8NH




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Thomas Hartman
Hear hear.

But a few successful happstack private sector startups could change that...

2010/2/10 Jason Dusek jason.du...@gmail.com:
 2010/02/10 Roderick Ford develo...@live.com:
 A U.S. president would probably subsidize such a job-creating endeavor too!

  The US government generally subsidizes these kinds of things
  through DoD spending (and a few NSF grants). That is probably
  hard to get into.

 --
 Jason Dusek
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] vector to uvector and back again

2010-02-11 Thread Roman Leshchinskiy
On 12/02/2010, at 13:49, Don Stewart wrote:

 rl:
 On 12/02/2010, at 12:39, Don Stewart wrote:
 
 bos:
 I'm thinking of switching the statistics library over to using vector. 
 uvector
 is pretty bit-rotted in comparison to vector at this point, and it's really
 seeing no development, while vector is The Shiny Future. Roman, would you 
 call
 the vector library good enough to use in production at the moment?
 
 uvector's not seeing much development, but at least in the last round of
 benchmarks it was still consistently faster -- since it's been
 micro-optimized.
 
 FWIW, the development version of vector is usually faster the both
 uvector and dph-prim-seq, at least for the development version of
 NoSlow.
 
 Ah ha -- that's useful. Public benchmarks soon? In time for the Zurich
 Hackathon?? (March 20)

I've been trying to find the time to put the benchmarks on my blog since the 
beginning of January but, alas, unsuccessfully so far. In any case, vector and 
NoSlow currently live in

  http://www.cse.unsw.edu.au/~rl/code/darcs/vector
  http://www.cse.unsw.edu.au/~rl/code/darcs/NoSlow

 If Roman declares the vector to be faster -- my main concern here for
 flat uarrays -- and makes the repo available so we can work on it, I'd
 be willing to merge uvector's tests and docs and extra array operations
 in.

It is generally faster than dph-prim-seq. Benchmarking against uvector is a bit 
difficult because it's missing operations necessary for implementing most of 
the algorithms in NoSlow (in particular, bulk updates). For the ones that 
uvector supports, vector tends to be faster.

BTW, this is for unsafe operations which don't use bounds checking. Bounds 
checking can make things a little slower but often doesn't cost anything as 
long as only collective operations are used. Sometimes it makes things faster 
which means that the simplifier still gets confused in some situations. There 
are also some significant differences between 6.12 and the HEAD (the HEAD is 
much more predictable).

In general, I find it hard to believe that the performance differences I'm 
seeing really matter all that much in real-world programs.

Roman


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Jason Dusek
  Things are missing but Haskell was certainly fit for
  practical use two years ago.

  The big things missing now are trust, mindshare and
  enough people who think reliability and consistency
  are a good play for long term productivity.

--
Jason Dusek
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell-src type inference algorithm?

2010-02-11 Thread Bernie Pope
On 12 February 2010 10:13, Niklas Broberg niklas.brob...@gmail.com wrote:
 Anyone know of a type inference utility that can run right on haskell-src
 types? or one that could be easily adapted?

 This is very high on my wish-list for haskell-src-exts, and I'm hoping
 the stuff Lennart will contribute will go a long way towards making it
 feasible. I believe I can safely say that no such tool exists (and if
 it does, why haven't you told me?? ;-)), but if you implement (parts
 of) one yourself I'd be more than interested to see, and incorporate,
 the results.

A long time ago I worked on hatchet:

   http://www.cs.mu.oz.au/~bjpop/hatchet/src/hatchet.tar.gz

which I believe was incorporated into JHC.

Hatchet was based on thih and haskell-src.

I gave up on it when I figured out a way to do what I wanted without
type information.

If I were going to do it again, I'd consider using Chameleon as a
starting point (I don't know where the most up-to-date sources are).

Cheers,
Bernie.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Computer Camp for kids 13 - 15 years old in Colorado featuring Functional Reactive Programming

2010-02-11 Thread John Peterson
Western State College in Colorado has a computer camp for kids aged 13 - 15.  
Although we don't use Haskell (it's Python on the inside) the underlying engine 
is Functional Reactive Programming.  We use a 3-D game engine to explore more 
than just programming - we cover a lot of math and physics.  We have a very 
unique camp - every day includes 3 - 4 hours of recreation in the area: 
rafting, rock climbing, kayaking, mountain biking.

Our website is at 
http://western.edu/academics/computerscience/computer-camp.html

The camp is the last week of June.  See the website for further details.

We're trying to get the software in releasable form - should be ready to go in 
a few months.

This is the fourth year of our camp and FRP has been an ideal way to introduce 
novices to computing.

John

(PS - all recreational activities at our camp are approved by Simon PJ - 
http://haskell.org/haskellwiki/Simon_Has_Fun)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] parallel and distributed haskell?

2010-02-11 Thread Bernie Pope
On 17 December 2009 06:21, Scott A. Waterman tswater...@gmail.com wrote:

 I feel there is quite a bit of latent interest in the subject here,
 but relatively little active development (compared to erlang, clojure, etc.)
 Can anyone involved give a quick overview (or pointers to one)?
 It would be good to hear what directions people are taking, and why,
 and where it's going.

I've recently moved into HPC and am now quite interested in using
Haskell on large clusters.

My first goal was to get hMPI working (HaskellMPI). The original
version appears to be from Michael Weber:

   http://www.foldr.org/~michaelw/hmpi/

which was followed up by Hal Daume III:

   http://hal3.name/newhmpi.tar.gz

It doesn't appear to be cabalised.

I wonder if anyone else has been using it?

Cheers,
Bernie.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Time for a San Francisco Hackathon?

2010-02-11 Thread Bryan O'Sullivan
I'm thinking it might be a good idea to organise a Haskell Hackathon for
people in (and who'd like to visit) the Bay Area. The tentative date I have
in mind is the first weekend in May (conveniently May 1). If you'd be
interested in attending or helping to organise, please let me know.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] HDBC convert [SqlValue] without muchos boilerplate

2010-02-11 Thread Iain Barnett
On 11 Feb 2010, at 10:05, Vasyl Pasternak wrote:

 
 But the fromSql function can convert everything to a String, so you never
 need to use `show`; just write
 
 convrow2 :: [SqlValue] -> String
 convrow2 (x:xs) = foldl (\i j -> i ++ " | " ++ fromSql j) (fromSql x) xs

 But, IMO, this is a more readable version of your function:
 
 convrow2' :: [SqlValue] -> String
 convrow2' = unwords . intersperse "|" . map fromSql
 

On 11 Feb 2010, at 10:06, Miguel Mitrofanov wrote:
 
 What if you just omit the show function? fromSql seems to be able to 
 convert almost anything to String.


Ok, thanks, that's a big help. I'm really glad to get rid of all that extra 
cruft I had there. I'd forgotten about intersperse, too. The new code works 
fine.
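
For completeness, a minimal sketch (not from the thread) of wiring the
fixed convrow2' into an HDBC query; connectSqlite3 and the table and
column names are assumptions:

import Database.HDBC (SqlValue, fromSql, quickQuery', disconnect)
import Database.HDBC.Sqlite3 (connectSqlite3)
import Data.List (intersperse)

convrow2' :: [SqlValue] -> String
convrow2' = unwords . intersperse "|" . map fromSql

main :: IO ()
main = do
  conn <- connectSqlite3 "test.db"
  rows <- quickQuery' conn "SELECT id, name FROM people" []  -- [[SqlValue]]
  mapM_ (putStrLn . convrow2') rows  -- note: fromSql on a NULL would error
  disconnect conn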


Regards,
Iain
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Time for a San Francisco Hackathon?

2010-02-11 Thread Don Stewart
bos:
 I'm thinking it might be a good idea to organise a Haskell Hackathon for 
 people
 in (and who'd like to visit) the Bay Area. The tentative date I have in mind 
 is
 the first weekend in May (conveniently May 1). If you'd be interested in
 attending or helping to organise, please let me know.

Interesting. We were planning a PDX one for late April.

Maybe we can coordinate and have a unified West Coast hackathon...

Video + IRC can keep the hackers in sync :)

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How many Haskell Engineer I/II/IIIs are there?

2010-02-11 Thread Jason Dusek
  I looked at generating C for AVR with JHC. I wanted to see what
  this program became:


http://github.com/solidsnack/trippy-waves/blob/99ad424a3ed4a21ff6f6a662293d6d21e92d6611/using-jhc/RGB.hs

  The program is relatively simple. It doesn't work, of course (I
  never did get the right FFI bindings figured out) but the
  generated C is suggestive.


http://github.com/solidsnack/trippy-waves/blob/99ad424a3ed4a21ff6f6a662293d6d21e92d6611/using-jhc/hs.out_code.c

  The generated `main' is very plain:

static void A_STD
ftheMain(void)
{
jhc_function_inc();
uintptr_t v10 = ((uintptr_t)DDRB());
*((uint8_t *)(v10)) = 23;
uintptr_t v18 = ((uintptr_t)PORTB());
return *((uint8_t *)(v18)) = 23;
}

  This is a simple, literal translation of my foreign calls. To
  all appearances, the runtime is entirely bypassed. The
  function `jhc_function_inc()' is a performance counter, set to
  no-op for non-profiling builds as far as I can tell.

  It also doesn't compile but that's because I can't figure out
  how to declare pointers in the FFI; and if I could, then I'd
  have to go through by hand and pull out includes for things
  that aren't available for AVR programming -- locale.h and
  such.
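
  FWIW, one way to get typed pointers across the FFI is to have tiny C
  helpers hand back the register addresses and poke through them from
  Haskell. A rough sketch -- the helper names are hypothetical (on AVR
  they would be one-line C functions returning &DDRB and &PORTB), and
  none of this is specific to JHC:

{-# LANGUAGE ForeignFunctionInterface #-}
import Foreign.Ptr      (Ptr)
import Foreign.Storable (poke)
import Data.Word        (Word8)

-- Hypothetical C helpers returning the memory-mapped register addresses.
foreign import ccall unsafe "ddrb_addr"  ddrbAddr  :: IO (Ptr Word8)
foreign import ccall unsafe "portb_addr" portbAddr :: IO (Ptr Word8)

main :: IO ()
main = do
  ddrb  <- ddrbAddr
  portb <- portbAddr
  poke ddrb  23   -- configure pins as outputs
  poke portb 23   -- drive them high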

--
Jason Dusek
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: Dungeons of Wor - a largish FRP example and a fun game, all in one!

2010-02-11 Thread Patai Gergely
 1) This is missing the obligatory youtube video.
That's usually handled by dons. ;)

Gergely

-- 
http://www.fastmail.fm - Email service worth paying for. Try it for free

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN: Dungeons of Wor - a largish FRP example and a fun game, all in one!

2010-02-11 Thread Patai Gergely
 Exciting! But on a mac, I can't get the window to become focussed
 or accept input. Tips ?
I don't have a Mac, but I heard that GLFW is not without problems there,
so maybe it's the culprit this time too. Do other GLFW apps work on your
machine?

Gergely

-- 
http://www.fastmail.fm - IMAP accessible web-mail

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe