Re: [Haskell-cafe] How to use unsafePerformIO properly (safely?)

2010-03-31 Thread Emil Axelsson

In Feldspar's module for observable sharing [1] I use the following

{-# OPTIONS_GHC -O0 #-}

which I assumed would take care of the steps required for 
unsafePerformIO. Could someone please tell me whether this assumption is correct?


(Of course, observable sharing is not safe regardless, but that's beside 
the point :) )


/ Emil

[1] 
http://hackage.haskell.org/packages/archive/feldspar-language/0.2/doc/html/src/Feldspar-Core-Ref.html




Ivan Miljenovic skrev:

I use the dreaded unsafePerformIO for a few functions in my graphviz
library ( 
http://hackage.haskell.org/packages/archive/graphviz/2999.8.0.0/doc/html/src/Data-GraphViz.html
).  However, a few months ago someone informed me that the
documentation for unsafePerformIO had some steps that should be
followed whenever it's used:
http://www.haskell.org/ghc/docs/latest/html/libraries/base-4.2.0.0/System-IO-Unsafe.html
.

Looking through this documentation, I'm unsure on how to deal with the
last two bullet points (adding NOINLINE pragmas is easy).  The code
doesn't combine IO actions, etc. and I don't deal with mutable
variables, so do I have to worry about them?
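For reference, the usually recommended pattern looks something like the following (a minimal sketch with a hypothetical top-level counter, not code from graphviz; the NOINLINE pragma is the part the documentation insists on):

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- Hypothetical top-level mutable counter.  The NOINLINE pragma stops
-- GHC from inlining the unsafePerformIO call, which could otherwise
-- duplicate the IORef at each use site.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

-- Each call hands out the next identifier.
nextId :: IO Int
nextId = atomicModifyIORef counter (\n -> (n + 1, n))
```

The documentation additionally suggests compiling such modules with -fno-cse (and -fno-full-laziness in some cases) so that separate uses are not merged.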


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Data Structures GSoC

2010-03-31 Thread Nathan Hunter
Hello.

I am hoping to take on the Data Structures project proposed two years ago by
Don Stewart here (http://hackage.haskell.org/trac/summer-of-code/ticket/1549)
this summer.
Before I write up my proposal to Google, I wanted to gauge the reaction of
the Haskell community to this project.
Particularly:

-What Data Structures in the current libraries are in most dire need of
improvement?
-How necessary do you think a Containers Library revision is?
-Should I attempt to build on the work Jamie Brandon did with Map as
generalised tries, or is that beyond the scope of this project?

I am very excited to work with functional data structures, and I would
greatly appreciate any input.

-Nathan Hunter


[Haskell-cafe] Re: Data Structures GSoC

2010-03-31 Thread Achim Schneider
Nathan Hunter enfer...@gmail.com wrote:

 -What Data Structures in the current libraries are in most dire need
 of improvement?
 -How necessary do you think a Containers Library revision is?
 -Should I attempt to build on the work Jamie Brandon did with Map as
 generalised tries, or is that beyond the scope of this project

As I see it, the most dire need is a unified interface to everything,
as well as instance selection, think

type instance MapOf Int = IntMap
type instance MapOf (a,b,c) = Tuple3Map (MapOf a) (MapOf b) (MapOf c) a b c

(gmap is very, very handy in every other aspect)
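A fuller (still entirely hypothetical) version of that sketch, assuming the TypeFamilies extension, with class and function names invented for illustration:

```haskell
{-# LANGUAGE TypeFamilies, FlexibleInstances #-}
import qualified Data.IntMap as IM
import qualified Data.Map    as M

-- A type family selects a concrete map implementation per key type,
-- and a small class gives a unified interface over the selection.
type family MapOf k :: * -> *
type instance MapOf Int    = IM.IntMap
type instance MapOf String = M.Map String

class MapKey k where
  insertMap :: k -> v -> MapOf k v -> MapOf k v
  lookupMap :: k -> MapOf k v -> Maybe v

instance MapKey Int where
  insertMap = IM.insert
  lookupMap = IM.lookup

instance MapKey String where
  insertMap = M.insert
  lookupMap = M.lookup
```

With this, `lookupMap` and `insertMap` dispatch to IntMap or Map purely on the key type.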


We have a lot of useful interfaces (e.g. ListLike, Edison), but they
don't seem to enjoy widespread popularity.

There's some lack when it comes to hashtables (I'd like to see a
Haskell implementation of perfect hashing, as that's sometimes jolly
useful) as well as cache-obliviousness (which is a can of worms:
Ensuring data layout requires using mutable Arrays, which sucks, so
what we actually need is a gc/allocator that can be told to allocate in
van Emde Boas layout), but in general, the implementation side is fine.

I would also like to be able to match on the left head of Data.Sequence
in the same way I can match on lists (see e.g. [1]), but I guess
there'd have to be more community backing for that to become reality.


[1] http://hackage.haskell.org/trac/ghc/ticket/3583
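For comparison, matching the left end of a Data.Sequence currently has to go through a view rather than a direct pattern (a small sketch using viewl; the helper name is made up):

```haskell
import Data.Sequence (Seq, ViewL(..), fromList, viewl)

-- Return the leftmost element of a Seq, or a default when empty.
-- With lists this would be a plain (x:_) pattern; with Seq we
-- pattern match on the ViewL produced by viewl.
leftHead :: a -> Seq a -> a
leftHead def s = case viewl s of
  EmptyL -> def
  x :< _ -> x
```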

-- 
(c) this sig last receiving data processing entity. Inspect headers
for copyright history. All rights reserved. Copying, hiring, renting,
performance and/or quoting of this signature prohibited.




[Haskell-cafe] benchmarking pure code

2010-03-31 Thread Paul Brauner
Hello,

I'm writing a library for dealing with binders and I want to benchmark
it against DeBruijn, Locally Nameless, HOAS, etc.

One of my benchmarks consists of:

  1. generating a big term \x.t
  2. substituting u for x in t

The part I want to benchmark is 2. In particular I would like that:

 a. \x.t is already evaluated when I run 2 (I don't want to measure the
performance of the generator)
 b. the action of substituting u for x in t is measured as if I had to
fully evaluate the result (by printing the resulting term, for
instance).

After looking at what was available on Hackage, I set my mind on
strictbench, which basically calls (rnf x `seq` print x) and then uses
benchpress to measure the pure computation x.

Since I wanted (a), my strategy was (schematically):

  let t = genterm
  rnf t `seq` print 
  bench (subst u t)

I got numbers I didn't expect so I ran the following program:

  let t = genterm
  print t
  bench (subst u t)

and then I got other numbers! Which were closer to what I think they
should be, so I may be happy with them, but all of this seems to
indicate that rnf doesn't behave as intended.

Then I did something different: I wrote two programs. One that prints the
result of (subst u t):

  let t = genterm
  let x = (subst u t)
  print x
  bench (print x)

recorded the numbers from that one and then ran the program:

  let t = genterm
  bench (print (subst u t))

got the numbers, and subtracted the first ones from them.

By doing so, I'm sure that I get realistic numbers at least:
since I print the whole resulting term, I've got visual proof
that it's been evaluated. But this is not very satisfactory.
Does anyone have an idea why calling rnf before the bench
doesn't seem to cache the result the way calling show does?
(My instances of NFData follow the scheme described in the strictbench
documentation.) If not, do you think that measuring (computation +
pretty-printing time - pretty-printing time) is OK?
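For requirement (a), one common way to guarantee pre-evaluation outside the timed region is to combine rnf with Control.Exception.evaluate (a minimal sketch, independent of strictbench; prepare is a made-up helper name):

```haskell
import Control.DeepSeq   (NFData, rnf)
import Control.Exception (evaluate)

-- Force a value to normal form inside IO, so the evaluation work
-- demonstrably happens here and not inside the later, timed computation.
prepare :: NFData a => a -> IO a
prepare x = evaluate (rnf x) >> return x
```

Because evaluate sequences the forcing into the IO action itself, it cannot be floated away the way a bare `rnf t `seq` ...` sometimes can.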

Regards,
Paul


Re: [Haskell-cafe] benchmarking pure code

2010-03-31 Thread Edward Z. Yang
Excerpts from Paul Brauner's message of Wed Mar 31 03:17:02 -0400 2010:
 The part I want to benchmark is 2. In particular I would like that:
 
  a. \x.t is already evaluated when I run 2 (I don't want to measure the
 performances of the generator)
  b. the action of substituting u for x in t were measured as if I had to
 fully evaluate the result (by printing the resulting term for
 instance).

Criterion uses Control.DeepSeq; perhaps this is what you are looking for?

Cheers,
Edward


Re: [Haskell-cafe] benchmarking pure code

2010-03-31 Thread Bas van Dijk
On Wed, Mar 31, 2010 at 9:17 AM, Paul Brauner paul.brau...@loria.fr wrote:
 Does anyone have an idea why calling rnf before the bench
 doesn't seem to cache the result as calling show does?
 (my instances of NFData follow the scheme described in strictbench
 documentation).

Is it possible you could show us your term type and your NFData instance?

regards,

Bas


Re: [Haskell-cafe] Re: Data Structures GSoC

2010-03-31 Thread Roman Leshchinskiy
On 31/03/2010, at 18:14, Achim Schneider wrote:

 We have a lot of useful interfaces (e.g. ListLike, Edison), but they
 don't seem to enjoy wide-spread popularity.

Perhaps that's an indication that we need different interfaces? IMO, huge 
classes which generalise every useful function we can think of just aren't the 
right approach. We need small interfaces between containers and algorithms. In 
fact, the situation is perhaps somewhat similar to C++, where, by providing 
exactly that, the STL has been able to replace OO-style collection libraries 
which never really worked all that well.

Roman




Re: [Haskell-cafe] Data Structures GSoC

2010-03-31 Thread Darryn Reid
Nathan,

For what it's worth: I'd propose that Data.HashTable needs to be
replaced. Having played around with it and found it wanting, it appears
to me that its limitations are pretty common knowledge in the Haskell
community. (I'm sure most people on this list already know much more
about the limitations of the existing HashTable library than I do, for I
am only new to Haskell, so sorry if my suggestion is an eyeball-roller.)

Darryn.

On Wed, 2010-03-31 at 00:00 -0700, Nathan Hunter wrote:
 Hello. 
 
 
 I am hoping to take on the Data Structures project proposed two years
 ago by Don Stewart here, this summer.
 Before I write up my proposal to Google, I wanted to gauge the
 reaction of the Haskell community to this project.
 Particularly:
 -What Data Structures in the current libraries are in most
 dire need of improvement?
 -How necessary do you think a Containers Library revision is?
 -Should I attempt to build on the work Jamie Brandon did with
 Map as generalised tries, or is that beyond the scope of this
 project
 I am very excited to work with functional data structures, and I would
 greatly appreciate any input.
 
 
 -Nathan Hunter




Re: [Haskell-cafe] why doesn't time allow diff of localtimes ?

2010-03-31 Thread Joachim Breitner
Hi,

Quoting bri...@aracnet.com:
  which is a variation of the question: why can't I compare
LocalTimes? Or am I missing something in Time (yet again)?

Am Mittwoch, den 31.03.2010, 01:32 -0400 schrieb wagne...@seas.upenn.edu:
 Two values of LocalTime may well be computed with respect to different  
 timezones, which makes the operation you ask for dangerous. First  
 convert to UTCTime (with localTimeToUTC), then compare.
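The suggested route can be sketched as follows (a hypothetical helper assuming the time package; diffLocal is not part of the library, and the caller must decide which single zone both times are interpreted in):

```haskell
import Data.Time

-- Interpret both LocalTimes in one fixed zone, convert to UTCTime,
-- and take the difference of the UTC times.
diffLocal :: TimeZone -> LocalTime -> LocalTime -> NominalDiffTime
diffLocal tz a b = diffUTCTime (localTimeToUTC tz a) (localTimeToUTC tz b)
```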

but this leads to the question: Why can I not compare or diff ZonedTime?

Greetings,
Joachim

-- 
Joachim nomeata Breitner
  mail: m...@joachim-breitner.de | ICQ# 74513189 | GPG-Key: 4743206C
  JID: nome...@joachim-breitner.de | http://www.joachim-breitner.de/
  Debian Developer: nome...@debian.org




Re: [Haskell-cafe] More Language.C work for Google's Summer of Code

2010-03-31 Thread Malcolm Wallace

Malcolm would have to attest to how complete it is w.r.t. say, gcc's
preprocessor,


cpphs is intended to be as faithful to the CPP standard as possible,  
whilst still retaining the extra flexibility we want in a non-C  
environment, e.g. retaining the operator symbols //, /*, and */.  If  
the behaviour of cpphs does not match gcc -E, then it is either a bug  
(please report it) or an intentional feature.


Real CPP is rather horribly defined as a lexical analyser for C, so  
has a builtin notion of identifier, operator, etc, which is not so  
useful for all the other settings in which we just want to use  
conditional inclusion or macros.  Also, CPP fully intermingles  
conditionals, file inclusion, and macro expansion, whereas cpphs makes  
a strenuous effort to separate those things into logical phases: first  
the conditionals and inclusions, then macro expansion.  This  
separation makes it possible to run only one or other of the phases,  
which can occasionally be useful.


 One concern is that Language.C is BSD-licensed (and it would be  
nice to keep it that way), and cpphs is LGPL. However, if cpphs  
remained a separate program, producing C + extra stuff as output, and  
the Language.C parser understood the extra stuff, this could  
accomplish what I'm interested in.


As for licensing, yes, cpphs as a standalone binary, is GPL.  The  
library version is LGPL.  One misconception is that a BSD-licensed  
library cannot use an LGPL'd library - of course it can.  You just  
need to ensure that everyone can update the LGPL'd part if they wish.   
And as I always state for all of my tools, if the licence is a problem  
for any user, contact me to negotiate terms.  I'm perfectly willing to  
allow commercial distribution with exemption from some of the GPL  
obligations.  (And I note in passing that other alternatives like gcc  
are also GPL'd.)


Regards,
Malcolm


Re: [Haskell-cafe] benchmarking pure code

2010-03-31 Thread Paul Brauner
Hello,

actually I don't know if I can. I totally wouldn't mind, but this is
mainly my co-author's work and I don't know if he would (I suppose not,
but since he is sleeping right now I can't check). However, let's assume
it's a de Bruijn representation, for instance; I can tell you the scheme
I used:

data Term = Lam Term | App Term Term | Var Int

instance NFData Term where
  rnf (Lam t)     = rnf t
  rnf (App t1 t2) = rnf t1 `seq` rnf t2
  rnf (Var x)     = rnf x

the actual datatype doesn't have fancy stuff like higher-order
types for constructors; it's really similar. The only difference
is that it is a GADT, but this shouldn't change anything, right?

Did I make some mistake in instancing NFData ?

Regards,
Paul

On Wed, Mar 31, 2010 at 09:32:29AM +0200, Bas van Dijk wrote:
 On Wed, Mar 31, 2010 at 9:17 AM, Paul Brauner paul.brau...@loria.fr wrote:
  Does anyone have an idea why calling rnf before the bench
  doesn't seem to cache the result as calling show does?
  (my instances of NFData follow the scheme described in strictbench
  documentation).
 
 Is it possible you could show us your term type and your NFData instance?
 
 regards,
 
 Bas


Re: [Haskell-cafe] Re: ANN: data-category, restricted categories

2010-03-31 Thread Dan Doel
On Tuesday 30 March 2010 4:34:10 pm Ashley Yakeley wrote:
 Worse than that, if bottom is a value, then Hask is not a category! Note
 that while undefined is bottom, (id . undefined) and (undefined . id)
 are not.

Hask can be a category even if bottom is a value, with slight modification. 
That specific problem can be overcome by eliminating seq (and associated 
stuff; strict fields, bang patterns, ...), because it is the only thing in the 
language that can distinguish between the extensionally equal:

  undefined
  \_ -> undefined

the latter being what you get from id . undefined and undefined . id.
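The distinction can be observed directly (a small sketch; `forces` is a made-up helper — forcing the eta-expanded value succeeds, while forcing undefined itself throws):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- seq tells the two extensionally equal values apart:
-- (\_ -> undefined) is already in WHNF, so forcing it succeeds,
-- whereas forcing undefined raises an exception.
forces :: a -> IO Bool
forces x = do
  r <- try (evaluate (x `seq` ())) :: IO (Either SomeException ())
  return (either (const False) (const True) r)
```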

Bottom, or more specifically, lifted types, tend to ruin other nice 
categorical constructions, though. Lifted sums are not Hask coproducts, lifted 
products are not Hask products, the empty type is terminal, rather than 
initial, and () isn't terminal. But, if we ignore bottoms, these deficiencies 
disappear.

See, of course, Fast and Loose Reasoning is Morally Correct. *

-- Dan

[*] http://www.comlab.ox.ac.uk/jeremy.gibbons/publications/fast+loose.pdf


[Haskell-cafe] Numeric.LinearProgramming - confusion in constructors for bounds

2010-03-31 Thread Ozgur Akgun
Hi everyone and Alberto,

Numeric.LinearProgramming[1] provides a very nice interface for solving LP
optimisation problems, and the well-known simplex algorithm itself. I must
say I quite liked the interface it provides, simple yet sufficient.

But, to my understanding, there is a confusion in the constructor names
(symbols, actually) for constraints. In LP, one can only write constraints in
the form of ==, <=, or >=. You *cannot* write a constraint using strict
inequalities. There is nothing wrong with the algorithm, but I guess it would
be better to have the constructor symbols right. See [2].

If this is a design choice, I think it should explicitly be stated.

Regards,

[1] http://hackage.haskell.org/package/hmatrix-glpk
[2]
http://hackage.haskell.org/packages/archive/hmatrix-glpk/0.1.0/doc/html/Numeric-LinearProgramming.html#t%3ABound

-- 
Ozgur Akgun


Re: [Haskell-cafe] GSoC and Machine learning

2010-03-31 Thread Alp Mestanogullari
Note that, if any student is interested, the Haskell Neural Network library
[1] is being rewritten from scratch. We (Thomas Bereknyi and I) are
discussing many core data structure alternatives, with some suggestions from
Edward Kmett. There may even be some room for a rewrite or update of fgl,
possibly with an alternative conception, to fit HNN well. I am definitely
not sure whether this is worth a GSoC and whether the community would benefit
that much from such work, but it's there.

[1] http://haskell.org/haskellwiki/HNN

On Tue, Mar 30, 2010 at 1:55 PM, Grzegorz C pite...@gmail.com wrote:


 Hi,


 Ketil Malde-5 wrote:
 
  Once upon a time, I proposed a GSoC project for a machine learning
  library.
 
  I still get some email from prospective students about this, whom I
  discourage as best I can by saying I don't have the time or interest to
  pursue it, and that chances aren't so great since you guys tend to
  prefer language-related stuff instead of application-related stuff.
 
  But if anybody disagrees with my sentiments and is willing to mentor
  this, there are some smart students looking for an opportunity.  I'd be
  happy to forward any requests.
 

 I don't know whether this is a good idea for a GSoC project, but I would
 certainly welcome such a library. I am using Haskell a bit for statistical
 NLP: in my experience currently Haskell is excellent for the components
 which deal with data preprocessing and feature extraction, but when it
 comes
 to implementing the core training algorithms and running them on large data
 sets, it's easy to get very poor performance and/or unexpected stack
 overflows. So if a library could provide some well-tuned and tested
 building
 blocks for implementing the performance critical parts of machine learning
 algorithms, it would improve the coding experience in a major way.

 Best,
 --
 Grzegorz





-- 
Alp Mestanogullari
http://alpmestan.wordpress.com/
http://alp.developpez.com/


[Haskell-cafe] Implementation of Functional Languages - a tutorial

2010-03-31 Thread C K Kashyap
Hi Everybody,
I've started reading SPJ's book. When I tried to execute some sample code
in Miranda, I found that Miranda does not seem to recognize things like

import Utils
or
module Langauge where ...

Has someone created clean, compilable Miranda source out of this book?

-- 
Regards,
Kashyap


Re: [Haskell-cafe] GSoC and Machine learning

2010-03-31 Thread Mihai Maruseac
On Wed, Mar 31, 2010 at 1:19 PM, Alp Mestanogullari a...@mestan.fr wrote:
 Note that, if any student is interested, the Haskell Neural Network library
 [1] is being rewritten from scratch. We (Thomas Bereknyi and I) are
 discussing many core data structure alternatives, with some suggestions from
 Edward Kmett. There may even be some room for a rewrite or update of fgl,
 possibly with an alternative conception, to fit well HNN. I am definitely
 not sure if this is worth a GSoC and if the community would benefit that
 much from such a work, but it's there.
 [1] http://haskell.org/haskellwiki/HNN

Well, I'd like to tie two of my favourite things together. I'm using
neural nets here and there (not for very big tasks though, yet) and I
intended to use them in haskell too. The code from [0] was intended to
become one day useful for a project on neural nets in Haskell.

I would be interested in this project if it is accepted and there
are mentors.

[0]: http://pgraycode.wordpress.com/2010/01/25/a-general-network-module/

-- 
Mihai Maruseac


[Haskell-cafe] Re: Seeking advice about monadic traversal functions

2010-03-31 Thread Heinrich Apfelmus
Darryn Reid wrote:

 I've coded a (fairly) general rewriting traversal - I suspect the
 approach might be generalisable to all tree-like types, but this doesn't
 concern me much right now. My purpose is for building theorem provers
 for a number of logics, separating the overall control mechanism from
 the specific rules for each logic.
 
 The beauty of Haskell here is in being able to abstract the traversal
 away from the specific reduction rules and languages of the different
 logics. I have pared down the code here to numbers rather than modal
 formulas for the sake of clarity and simplicity. My question is
 two-fold:
 1. Does my representation of the traversal seem good or would it be
 better expressed differently? If so, I'd appreciate the thoughts of more
 experienced Haskell users.

Looks fine to me, though I have no experience with tree
rewriting. :) I'd probably write it like this:

 data Tableau a =   Nil   -- End of a path
  | Single a (Tableau a)  -- Conjunction
  | Fork (Tableau a) (Tableau a)  -- Disjunction
  deriving (Eq, Show)
 data Action = Cut | Next
 type Rewriter m a = Tableau a -> m (Tableau a, Action)

rewrite :: (Monad m) => Rewriter m a -> Tableau a -> m (Tableau a)
rewrite f t = f t >>= (uncurry . flip) go
    where
    go Cut  t             = return t
    go Next Nil           = return Nil
    go Next (Single x t1) = liftM (Single x) (rewrite f t1)
    go Next (Fork t1 t2 ) = liftM2 Fork (rewrite f t1)
                                        (rewrite f t2)

In particular,  liftM  and  liftM2  make it apparent that we're just
wrapping the result in a constructor.


In case you want more flexibility in moving from children to parents,
you may want to have a look at zippers

  http://en.wikibooks.org/wiki/Haskell/Zippers

 2. I cannot really see how to easily extend this to a queue-based
 breadth-first traversal, which would give me fairness. I'm sure others
 must have a good idea of how to do what I'm doing here except in
 breadth-first order; I'd appreciate it very much if someone could show
 me how to make a second breadth-first version.

This is more tricky than I thought! Just listing the nodes in
breadth-first order is straightforward, but the problem is that you
also want to build the result tree. Depth-first search follows the tree
structure more closely, so building the result was no problem.

The solution is to solve the easy problem of listing all nodes
in breadth-first order first, because it turns out that it's possible to
reconstruct the result tree from such a list! In other words, it's
possible to run breadth-first search in reverse, building the tree from
a list of nodes.

How exactly does this work? If you think about it, the analogous problem
for depth-first search is not too difficult, you just walk through the
list of nodes and build a stack; pushing  Nil  nodes and combining the
top two items when encountering  Fork  nodes. So, for solving the
breadth-first version, the idea is to build a queue instead of a stack.

The details of that are a bit tricky of course, you have to be careful
when to push what on the queue. But why bother being careful if we have
Haskell? I thus present the haphazardly named:


  Lambda Fu, form 132 - linear serpent inverse

The idea is to formulate breadth-first search in a way that is *trivial*
to invert, and the key ingredient to that is a *linear* function, i.e.
one that never discards or duplicates data, but only shuffles it around.
Here is what I have in mind:

{-# LANGUAGE ViewPatterns #-}
import Data.Sequence as Seq  -- this will be our queue type
import Data.List as List

type Node a  = Tableau a -- just a node without children (ugly type)
type State a = ([Action],[(Action,Node a)], Seq (Tableau a))
queue (_,_,q) = q
nodes (_,l,_) = l

analyze :: State a -> State a
analyze (Cut :xs, ts, viewl -> t           :< q) =
    (xs, (Cut , t         ) : ts, q )
analyze (Next:xs, ts, viewl -> Nil         :< q) =
    (xs, (Next, Nil       ) : ts, q )
analyze (Next:xs, ts, viewl -> Single x t1 :< q) =
    (xs, (Next, Single x u) : ts, q |> t1 )
analyze (Next:xs, ts, viewl -> Fork t1 t2  :< q) =
    (xs, (Next, Fork u u  ) : ts, (q |> t1) |> t2 )

u = Nil -- or undefined

bfs xs t = nodes $
until (Seq.null . queue) analyze (xs,[],singleton t)

So,  bfs  just applies  analyze  repeatedly on a suitable state which
includes the queue, the list of nodes in breadth-first order and a
stream of actions. (This is a simplified example, you will calculate the
actions on the fly of course.) To be able to pattern match on the queue,
I have used GHC's ViewPattern extension.

It should be clear that  analyze  performs exactly one step of a
breadth-first traversal.

Now, the key point is that  analyze  is *linear*, it never discards or
duplicates any of the input variables, it just 

Re: [Haskell-cafe] Implementation of Functional Languages - a tutorial

2010-03-31 Thread minh thu
2010/3/31 C K Kashyap ckkash...@gmail.com:
 Hi Everybody,
 I've started reading SPJ's book - When I tried to execute some sample code
 in miranda, I found that Miranda does not seem to recognize things like
 import Utils
 or
 module Langauge where ...
 Has someone created a clean compilable miranda source out of this book?

Hi,

If I remember correctly, the version of the book on the net contains
Haskell code, even if the text is about Miranda. The example you give
above is definitely Haskell.

Cheers,
Thu


Re: [Haskell-cafe] Implementation of Functional Languages - a tutorial

2010-03-31 Thread C K Kashyap
Great ... thanks Thu ...

Regards,
Kashyap

On Wed, Mar 31, 2010 at 4:15 PM, minh thu not...@gmail.com wrote:

 2010/3/31 C K Kashyap ckkash...@gmail.com:
  Hi Everybody,
  I've started reading SPJ's book - When I tried to execute some sample
 code
  in miranda, I found that Miranda does not seem to recognize things like
  import Utils
  or
  module Langauge where ...
  Has someone created a clean compilable miranda source out of this book?

 Hi,

 If I remember correctly, the version of the book on the net contains
 Haskell code, even if the text is about Miranda. The example you give
 above is definitely Haskell.

 Cheers,
 Thu




-- 
Regards,
Kashyap


Re: [Haskell-cafe] benchmarking pure code

2010-03-31 Thread Bas van Dijk
On Wed, Mar 31, 2010 at 11:06 AM, Paul Brauner paul.brau...@loria.fr wrote:
 data Term = Lam Term | App Term Term | Var Int

  instance NFData Term where
  rnf (Lam t)     = rnf t
  rnf (App t1 t2) = rnf t1 `seq` rnf t2
  rnf (Var x)     = rnf x

 the actual datatype doesn't have fancy stuff like higher-order
 types for constructors, it's really similar. The only difference
  is that it is a GADT, but this shouldn't change anything right?

 Did I make some mistake in instancing NFData ?

No, your NFData instance is correct. You first pattern match on the
term followed by recursively calling rnf on the sub-terms. This will
correctly force the entire term.

Maybe you could try using criterion[1] for your benchmark and see if
that makes any difference. Something like:

{-# LANGUAGE BangPatterns #-}

import Criterion.Main

main :: IO ()
main = let !t = genterm in defaultMain [bench "subst" $ nf (subst u) t]

regards,

Bas

[1] http://hackage.haskell.org/package/criterion


Re: [Haskell-cafe] GSoC and Machine learning

2010-03-31 Thread Alp Mestanogullari
Well, you can join #hnn or #haskell-soc to discuss that with us. But don't
put too much hope on that, I'm quite sure it isn't GSoC worthy. OTOH, any
contribution is always welcome heh.

On Wed, Mar 31, 2010 at 12:39 PM, Mihai Maruseac
mihai.marus...@gmail.comwrote:

 On Wed, Mar 31, 2010 at 1:19 PM, Alp Mestanogullari a...@mestan.fr wrote:
  Note that, if any student is interested, the Haskell Neural Network
 library
  [1] is being rewritten from scratch. We (Thomas Bereknyi and I) are
  discussing many core data structure alternatives, with some suggestions
 from
  Edward Kmett. There may even be some room for a rewrite or update of fgl,
  possibly with an alternative conception, to fit well HNN. I am definitely
  not sure if this is worth a GSoC and if the community would benefit that
  much from such a work, but it's there.
  [1] http://haskell.org/haskellwiki/HNN

 Well, I'd like to tie two of my favourite things together. I'm using
 neural nets here and there (not for very big tasks though, yet) and I
 intended to use them in haskell too. The code from [0] was intended to
 become one day useful for a project on neural nets in Haskell.

 I would be interested in this project if it will be accepted and there
 would be mentors.

 [0]: http://pgraycode.wordpress.com/2010/01/25/a-general-network-module/

 --
 Mihai Maruseac




-- 
Alp Mestanogullari
http://alpmestan.wordpress.com/
http://alp.developpez.com/


Re: [Haskell-cafe] Haskell.org re-design

2010-03-31 Thread Johan Tibell
On Mon, Mar 29, 2010 at 3:44 AM, Christopher Done
chrisd...@googlemail.com wrote:
 This is a post about re-designing the whole Haskell web site.

I really like the design a lot. Here are some ideas:

- There are several news streams going on at once. Perhaps Headlines
and Events could be merged into one stream. After watching the
Hackage RSS feed every day I don't know if it's interesting enough to
put on a front page. Perhaps in a side bar which brings me to my next
suggestions.

- Multi-column pages are tricky to scan! It works well in newspapers,
since the page height is limited, but for web pages I really prefer one
main column. Perhaps the second column could be made narrower?
Perhaps the footer content could be promoted into this second column
and have it be a more conventional right- (or left-) hand nav?

- The quick links seem a bit random where they now appear. :)

I'd also recommend looking at other programming languages web sites if
you haven't done so already:

http://www.ruby-lang.org/en/
http://www.python.org/
http://www.scala-lang.org/

Thanks for all your hard work!

Cheers,
Johan


Re: [Haskell-cafe] benchmarking pure code

2010-03-31 Thread Bas van Dijk
On Wed, Mar 31, 2010 at 12:57 PM, Bas van Dijk v.dijk@gmail.com wrote:
 main = let !t = genterm in defaultMain [bench "subst" $ nf (subst u) t]

Oops, that should be:

main = let t = genterm in
       rnf t `seq` defaultMain [bench "subst" $ nf (subst u) t]

Bas


Re: [Haskell-cafe] Implementation of Functional Languages - a tutorial

2010-03-31 Thread C K Kashyap
Looks like some functions are left as an exercise... I'd appreciate it if
someone could share complete code for the Core language.

On Wed, Mar 31, 2010 at 4:20 PM, C K Kashyap ckkash...@gmail.com wrote:

 Great ... thanks Thu ...

 Regards,
 Kashyap


 On Wed, Mar 31, 2010 at 4:15 PM, minh thu not...@gmail.com wrote:

 2010/3/31 C K Kashyap ckkash...@gmail.com:
  Hi Everybody,
  I've started reading SPJ's book - When I tried to execute some sample
 code
  in miranda, I found that Miranda does not seem to recognize things like
  import Utils
  or
  module Langauge where ...
  Has someone created a clean compilable miranda source out of this book?

 Hi,

 If I remember correctly, the version of the book on the net contains
 Haskell code, even if the text is about Miranda. The example you give
 above is definitely Haskell.

 Cheers,
 Thu




 --
 Regards,
 Kashyap




-- 
Regards,
Kashyap


Re: [Haskell-cafe] benchmarking pure code

2010-03-31 Thread Paul Brauner
Thank you, I will look at that. But it seems that criterion uses NFData
no?

Paul

On Wed, Mar 31, 2010 at 12:57:20PM +0200, Bas van Dijk wrote:
 On Wed, Mar 31, 2010 at 11:06 AM, Paul Brauner paul.brau...@loria.fr wrote:
  data Term = Lam Term | App Term Term | Var Int
 
  instance NFData Term where
   rnf (Lam t)     = rnf t
   rnf (App t1 t2) = rnf t1 `seq` rnf t2
   rnf (Var x)     = rnf x
 
  the actual datatype doesn't have fancy stuff like higher-order
  types for constructors, it's really similar. The only difference
  is that it is a GADT, but this shouldn't change anything, right?
 
  Did I make some mistake in instancing NFData ?
 
 No, your NFData instance is correct. You first pattern match on the
 term followed by recursively calling rnf on the sub-terms. This will
 correctly force the entire term.
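
Since the thread notes the real type is a GADT, a hedged sketch may help: the same instance goes through unchanged for a GADT encoding. The Term type below is illustrative, not the poster's actual code.

```haskell
{-# LANGUAGE GADTs #-}
-- Illustrative: an NFData instance for a GADT is written exactly like
-- the plain-ADT instance quoted above; GADT syntax changes nothing.
import Control.DeepSeq (NFData (..))

data Term where
  Var :: Int -> Term
  Lam :: Term -> Term
  App :: Term -> Term -> Term

instance NFData Term where
  rnf (Var x)     = rnf x
  rnf (Lam t)     = rnf t
  rnf (App t1 t2) = rnf t1 `seq` rnf t2
```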
 
 Maybe you could try using criterion[1] for your benchmark and see if
 that makes any difference. Something like:
 
 {-# LANGUAGE BangPatterns #-}
 
 import Criterion.Main
 
 main :: IO ()
  main = let !t = genterm in defaultMain [bench "subst" $ nf (subst u) t]
 
 regards,
 
 Bas
 
 [1] http://hackage.haskell.org/package/criterion


Re: [Haskell-cafe] Data.Graph?

2010-03-31 Thread Lee Pike
Thanks, Ivan, for the note about the other alternatives and about  
possible additions to your library (and of course for the library  
itself!).


I should mention that I completely missed containers in the  
hierarchical libraries (I was just looking in the base libraries).   
Sorry about that.


Lee


On Mar 30, 2010, at 7:23 PM, Ivan Miljenovic wrote:


Sorry for the duplicate email Lee, but I somehow forgot to CC the
mailing list :s

On 31 March 2010 13:12, Lee Pike leep...@gmail.com wrote:
I'd like it if there were a Data.Graph in the base libraries with  
basic

graph-theoretic operations.  Is this something that's been discussed?


I'm kinda working on a replacement to Data.Graph that will provide
graph-theoretic operations to a variety of graph types.

For now, it appears that Graphalyze on Hackage is the most complete  
library
for graph analysis; is that right?  (I actually usually just want a  
pretty

small subset of its functionality.)


Yay, someone likes my code! :p

I've been thinking about splitting off the algorithms section of
Graphalyze for a while; maybe I should do so now... (though I was
going to merge it into the above mentioned so-far-mainly-vapourware
library...).

There are a few other alternatives:

* FGL has a variety of graph operations (but I ended up
re-implementing a lot of the ones I wanted in Graphalyze because FGL
returns lists of nodes and I wanted the resulting graphs for things
like connected components, etc.).
* The dom-lt library
* GraphSCC
* hgal (which is a really atrocious port of nauty that is extremely
inefficient; I've started work on a replacement)
* astar (which is generic for all graph types since you provide
functions on the graph as arguments)

With the exception of FGL, all of these are basically libraries that
implement one particular algorithm/operation.

--
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com




Re: [Haskell-cafe] Data.Graph?

2010-03-31 Thread Ivan Lazar Miljenovic
Lee Pike leep...@gmail.com writes:
 I should mention that I completely missed containers in the
 hierarchical libraries (I was just looking in the base libraries).
 Sorry about that.

Oh, I thought you had done what most of us do: seen Data.Graph in
containers and promptly dismissed it... _

(IIRC, it doesn't really have many graph operations defined there.)
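
For reference, the handful of operations containers' Data.Graph does provide centre on strongly connected components and topological sorting; a small usage sketch:

```haskell
-- A taste of what containers' Data.Graph offers: SCCs and topological
-- sort, built from an adjacency list keyed by arbitrary Ord keys.
import Data.Graph (flattenSCC, graphFromEdges, stronglyConnComp, topSort)

main :: IO ()
main = do
  -- nodes 1 -> 2 -> 3 -> 1 form a cycle; node 4 stands alone
  let es = [('a', 1 :: Int, [2]), ('b', 2, [3]), ('c', 3, [1]), ('d', 4, [])]
  print (map flattenSCC (stronglyConnComp es))  -- node labels per SCC
  let (g, _, _) = graphFromEdges es
  print (topSort g)  -- only meaningful for acyclic graphs
```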

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com


[Haskell-cafe] Re: Data Structures GSoC

2010-03-31 Thread John Lato
 From: Roman Leshchinskiy r...@cse.unsw.edu.au

 On 31/03/2010, at 18:14, Achim Schneider wrote:

 We have a lot of useful interfaces (e.g. ListLike, Edison), but they
 don't seem to enjoy wide-spread popularity.

 Perhaps that's an indication that we need different interfaces? IMO, huge 
 classes which generalise every useful function we can think of just aren't the 
 right approach. We need small interfaces between containers and algorithms. 
 In fact, the situation is perhaps somewhat similar to C++ where by providing 
 exactly that the STL has been able to replace OO-style collection libraries 
 which never really worked all that well.

Agreed.  There should be a hierarchy with multiple interfaces, e.g.
Collection, List, Map, Set, etc.  I can't speak for Edison, but
ListLike only implements the List interface and doesn't provide an
appropriate base class.  I also agree that the STL (as well as the generic
collections in C#/ASP.Net) seem to take the best approach here.
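
The "small interfaces" point can be sketched as a set of minimal, capability-sized classes. All names below are invented for illustration; this is not an actual proposal or library.

```haskell
{-# LANGUAGE TypeFamilies #-}
-- Illustrative only: small capability-sized classes, STL-style, instead
-- of one huge class that generalises every list function at once.

class Collection c where
  type Elem c
  cempty  :: c
  cinsert :: Elem c -> c -> c

class Collection c => ToList c where
  ctoList :: c -> [Elem c]

instance Collection [a] where
  type Elem [a] = a
  cempty  = []
  cinsert = (:)

instance ToList [a] where
  ctoList = id

-- An algorithm written against the small interface works for any
-- instance, analogous to an STL algorithm over iterators:
fromListC :: Collection c => [Elem c] -> c
fromListC = foldr cinsert cempty
```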

That being said, I also agree with Darryn Reid that Data.HashTable
could use some work as a higher priority.

John


Re: [Haskell-cafe] Data.Graph?

2010-03-31 Thread Lee Pike

Oh, I thought you had done what most of us do: seen Data.Graph in
containers and promptly dismissed it... _

(IIRC, it doesn't really have many graph operations defined there.)


Yes, you're right---I just wanted to acknowledge that I'd missed that  
there was *something* there...


Lee


Re: [Haskell-cafe] Re: Are there any female Haskellers?

2010-03-31 Thread Tristan Seligmann
On Mon, Mar 29, 2010 at 4:48 PM, Steve Schafer st...@fenestra.com wrote:
 On Sun, 28 Mar 2010 20:38:49 -0700, you wrote:

  * The difference between genders is smaller than the difference between
individuals

 If only people would understand and accept the near-universality of
 this:

 The difference between any group you want to discriminate against and
 any group you want to discriminate in favor of is smaller than the
 difference between individuals.

Oh, come now, near-universality? There are millions of obvious
counter-examples. (For example, the difference in running ability
between people who are paralyzed from the neck down, and people who
aren't, should be readily apparent.) I like the spirit of your idea,
but let's be realistic here.
-- 
mithrandi, i Ainil en-Balandor, a faer Ambar


[Haskell-cafe] ANN: Hakyll-2.0

2010-03-31 Thread Jasper Van der Jeugt
Hello,

On this spring day I would like to announce the 2.0 release of
Hakyll[1], the static site generator. It is a rewrite, changes the API
for the better and introduces some new features. A brief changelog:

- Rewrite of the codebase to a clean, arrow based API.
- Pagination was added.
- Built-in functions to generate RSS and Atom.
- Many bugfixes.
- New tutorials added.

Of course, all feedback is welcome.

Kind regards,
Jasper Van der Jeugt

[1]: http://jaspervdj.be/hakyll


[Haskell-cafe] cabal through PHPProxy

2010-03-31 Thread Dupont Corentin
Hello,
I'm using PHPProxy to go on the internet (www.phpproxy.fr).
How can I tell cabal to use this?

Cheers,
Corentin


[Haskell-cafe] Re: Numeric.LinearProgramming - confusion in constructors for bounds

2010-03-31 Thread Alberto Ruiz

Hi Ozgur,

You are right, the operators are misleading. I will change them to 
:<=: and :>=:. And perhaps the symbol :<>: for the interval bound 
should also be improved...


Thanks for your suggestion!
Alberto

Ozgur Akgun wrote:

Hi everyone and Alberto,

Numeric.LinearProgramming[1] provides a very nice interface for solving 
LP optimisation problems, and the well-known simplex algorithm itself. I 
must say I quite liked the interface it provides, simple yet sufficient.


But, to my understanding, there is a confusion in the constructor names 
(symbols actually) for constraints. In LP, one needs to write 
constraints in the form of ==, <=, or >= only. You /cannot/ write a 
constraint using strict inequalities. There is nothing wrong with the 
algorithm, but I guess it would be better to have the constructor symbols 
right. See [2]


If this is a design choice, I think it should explicitly be stated.

Regards,

[1] http://hackage.haskell.org/package/hmatrix-glpk
[2] 
http://hackage.haskell.org/packages/archive/hmatrix-glpk/0.1.0/doc/html/Numeric-LinearProgramming.html#t%3ABound 



--
Ozgur Akgun




[Haskell-cafe] Re: Numeric.LinearProgramming - confusion in constructors for bounds

2010-03-31 Thread Ozgur Akgun
You are very welcome :)

What about not an operator but a regular constructor for the interval thing?
Something like: Between Double Double

Nevertheless, I think :<>: is not bad at all. You can leave it as it is.
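
Putting the thread's suggestions together, the constructor set might look like the following hedged sketch. The names follow the discussion above and are not the actual hmatrix-glpk API.

```haskell
-- Illustrative Bound type: only ==, <= and >= constraints are
-- expressible, plus an interval form, matching the thread's point that
-- LP admits no strict inequalities. "Between" is Ozgur's suggestion;
-- everything else is a guess at the intended constructor set.
data Bound x = x :<=: Double
             | x :>=: Double
             | x :==: Double
             | Between x (Double, Double)  -- lower and upper bound
  deriving Show

infix 4 :<=:, :>=:, :==:

-- e.g. the constraint 1*x1 + 2*x2 <= 10 over coefficient list [1, 2]:
exampleConstraint :: Bound [Double]
exampleConstraint = [1, 2] :<=: 10
```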

Best,

On 31 March 2010 14:57, Alberto Ruiz ar...@um.es wrote:

 Hi Ozgur,

  You are right, the operators are misleading. I will change them to :<=:
  and :>=:. And perhaps the symbol :<>: for the interval bound should also
  be improved...

 Thanks for your suggestion!
 Alberto


 Ozgur Akgun wrote:

 Hi everyone and Alberto,

 Numeric.LinearProgramming[1] provides a very nice interface for solving LP
 optimisation problems, and the well-known simplex algorithm itself. I must
 say I quite liked the interface it provides, simple yet sufficient.

  But, to my understanding, there is a confusion in the constructor names
  (symbols actually) for constraints. In LP, one needs to write constraints in
  the form of ==, <=, or >= only. You /cannot/ write a constraint using strict
  inequalities. There is nothing wrong with the algorithm, but I guess it would
  be better to have the constructor symbols right. See [2]

 If this is a design choice, I think it should explicitly be stated.

 Regards,

 [1] http://hackage.haskell.org/package/hmatrix-glpk
 [2]
 http://hackage.haskell.org/packages/archive/hmatrix-glpk/0.1.0/doc/html/Numeric-LinearProgramming.html#t%3ABound

 --
 Ozgur Akgun





-- 
Ozgur Akgun


[Haskell-cafe] Re: GSOC Haskell Project

2010-03-31 Thread Simon Marlow

On 30/03/2010 20:57, Mihai Maruseac wrote:


I'd like to introduce my idea for the Haskell GSOC of this year. In
fact, you already know about it, since I've talked about it here on
the haskell-cafe, on my blog and on reddit (even on #haskell one day).

Basically, what I'm trying to do is a new debugger for Haskell, one
that would be very intuitive for beginners, a graphical one. I've
given some examples and more details on my blog [0], [1], also linked
on reditt and other places.

This is not the application, I'm posting this only to receive some
kind of feedback before writing it. I know that it seems to be a
little too ambitious but I do think that I can divide the work into
sessions and finish what I'll start this summer during the next year
and following.

[0]: http://pgraycode.wordpress.com/2010/03/20/haskell-project-idea/
[1]: http://pgraycode.wordpress.com/2010/03/24/visual-haskell-debugger-part-2/

Thanks for your attention,


My concerns would be:

 - it doesn't look like it would scale very well beyond small
   examples, the graphical representation would very quickly
   get unwieldy, unless you have some heavyweight UI stuff
   to make it navigable.

 - it's too ambitious

 - have you looked around to see what kind of debugging tools
   people are asking for?  The most oft-requested feature is
   stack traces, and there's lots of scope for doing something
   there (but also many corpses littering the battlefield,
   so watch out!)

Cheers,
Simon


RE: [Haskell-cafe] cabal through PHPProxy

2010-03-31 Thread Bayley, Alistair
 From: haskell-cafe-boun...@haskell.org 
 [mailto:haskell-cafe-boun...@haskell.org] On Behalf Of Dupont Corentin

 i'm using PHPProxy to go on the internet (www.phpproxy.fr).
 How can i tell cabal to use this?


I think the normal way is through setting HTTP_PROXY env var e.g. set
HTTP_PROXY=http://localhost:3128

(I have a cntlm proxy on my local machine because our corporate proxy
uses NTLM authentication, which cabal does not support.)

Alistair
*
Confidentiality Note: The information contained in this message,
and any attachments, may contain confidential and/or privileged
material. It is intended solely for the person(s) or entity to
which it is addressed. Any review, retransmission, dissemination,
or taking of any action in reliance upon this information by
persons or entities other than the intended recipient(s) is
prohibited. If you received this in error, please contact the
sender and delete the material from any computer.
*



[Haskell-cafe] Re: Benchmarking and Garbage Collection

2010-03-31 Thread Simon Marlow

On 04/03/2010 22:01, Neil Brown wrote:

Jesper Louis Andersen wrote:

On Thu, Mar 4, 2010 at 8:35 PM, Neil Brown nc...@kent.ac.uk wrote:

CML is indeed the library that has the most markedly different
behaviour.
In Haskell, the CML package manages to produce timings like this for
fairly
simple benchmarks:

%GC time 96.3% (96.0% elapsed)

I knew from reading the code that CML's implementation would do
something
like this, although I do wonder if it triggers some pathological case
in the
GC.


That result is peculiar. What are you doing to the library, and what
do you expect happens? Since I have some code invested on top of CML,
I'd like to gain a little insight if possible.


In trying to simplify my code, the added time has moved from GC time to
EXIT time (and increased!). This shift isn't too surprising -- I believe
the time is really spent trying to kill lots of threads. Here's my very
simple benchmark; the main thread repeatedly chooses between receiving
from two threads that are sending to it:


import Control.Concurrent
import Control.Concurrent.CML
import Control.Monad

main :: IO ()
main = do
  let numChoices = 2
  cs <- replicateM numChoices channel
  mapM_ forkIO [replicateM_ (10 `div` numChoices) $ sync $ transmit c () | c <- cs]
  replicateM_ 10 $ sync $ choose [receive c (const True) | c <- cs]


Compiling with -threaded, and running with +RTS -s, I get:

INIT time 0.00s ( 0.00s elapsed)
MUT time 2.68s ( 3.56s elapsed)
GC time 1.84s ( 1.90s elapsed)
EXIT time 89.30s ( 90.71s elapsed)
Total time 93.82s ( 96.15s elapsed)


FYI, I've now fixed this in my working branch:

  INIT  time    0.00s  (  0.00s elapsed)
  MUT   time    0.88s  (  0.88s elapsed)
  GC    time    0.85s  (  1.04s elapsed)
  EXIT  time    0.05s  (  0.07s elapsed)
  Total time    1.78s  (  1.97s elapsed)

Cheers,
Simon


Re: [Haskell-cafe] cabal through PHPProxy

2010-03-31 Thread Dupont Corentin
Thanks for the response.
I don't think this will work because PHPProxy is a web-based proxy.
To use it, you have to type the HTTP address you want into a field on the page.

Corentin

On 3/31/10, Bayley, Alistair alistair.bay...@invesco.com wrote:
 From: haskell-cafe-boun...@haskell.org
 [mailto:haskell-cafe-boun...@haskell.org] On Behalf Of Dupont Corentin

 i'm using PHPProxy to go on the internet (www.phpproxy.fr).
 How can i tell cabal to use this?


 I think the normal way is through setting HTTP_PROXY env var e.g. set
 HTTP_PROXY=http://localhost:3128

 (I have a cntlm proxy on my local machine because our corporate proxy
 uses NTLM authentication, which cabal does not support.)

 Alistair




Re: [Haskell-cafe] Shootout update

2010-03-31 Thread Roman Leshchinskiy
I'm wondering... Since the DPH libraries are shipped with GHC by default, are we 
allowed to use them for the shootout?

Roman

On 30/03/2010, at 19:25, Simon Marlow wrote:

 The shootout (sorry, Computer Language Benchmarks Game) recently updated to 
 GHC 6.12.1, and many of the results got worse.  Isaac Gouy has added the +RTS 
 -qg flag to partially fix it, but that turns off the parallel GC completely 
 and we know that in most cases better results can be had by leaving it on.  
 We really need to tune the flags for these benchmarks properly.
 
 http://shootout.alioth.debian.org/u64q/haskell.php
 
 It may be that we have to back off to +RTS -N3 in some cases to avoid the 
 last-core problem (http://hackage.haskell.org/trac/ghc/ticket/3553), at least 
 until 6.12.2.
 
 Any volunteers with a quad-core to take a look at these programs and optimise 
 them for 6.12.1?
 
 Cheers,
   Simon


Re: [Haskell-cafe] Shootout update

2010-03-31 Thread Simon Marlow

On 31/03/2010 16:06, Roman Leshchinskiy wrote:

I'm wondering... Since the DPH libraries are shipped with GHC by default are we 
allowed to use them for the shootout?


I don't see why not.

*evil grin*

Simon


Roman

On 30/03/2010, at 19:25, Simon Marlow wrote:


The shootout (sorry, Computer Language Benchmarks Game) recently updated to GHC 
6.12.1, and many of the results got worse.  Isaac Gouy has added the +RTS -qg 
flag to partially fix it, but that turns off the parallel GC completely and we 
know that in most cases better results can be had by leaving it on.  We really 
need to tune the flags for these benchmarks properly.

http://shootout.alioth.debian.org/u64q/haskell.php

It may be that we have to back off to +RTS -N3 in some cases to avoid the 
last-core problem (http://hackage.haskell.org/trac/ghc/ticket/3553), at least 
until 6.12.2.

Any volunteers with a quad-core to take a look at these programs and optimise 
them for 6.12.1?

Cheers,
Simon


Re: [Haskell-cafe] Re: GSOC Haskell Project

2010-03-31 Thread Jason Dagit
On Wed, Mar 31, 2010 at 7:21 AM, Simon Marlow marlo...@gmail.com wrote:

 On 30/03/2010 20:57, Mihai Maruseac wrote:

  I'd like to introduce my idea for the Haskell GSOC of this year. In
 fact, you already know about it, since I've talked about it here on
 the haskell-cafe, on my blog and on reddit (even on #haskell one day).

 Basically, what I'm trying to do is a new debugger for Haskell, one
 that would be very intuitive for beginners, a graphical one. I've
 given some examples and more details on my blog [0], [1], also linked
 on reditt and other places.

 This is not the application, I'm posting this only to receive some
 kind of feedback before writing it. I know that it seems to be a
 little too ambitious but I do think that I can divide the work into
 sessions and finish what I'll start this summer during the next year
 and following.

 [0]: http://pgraycode.wordpress.com/2010/03/20/haskell-project-idea/
 [1]:
 http://pgraycode.wordpress.com/2010/03/24/visual-haskell-debugger-part-2/

 Thanks for your attention,


 My concerns would be:

  - it doesn't look like it would scale very well beyond small
   examples, the graphical representation would very quickly
   get unwieldy, unless you have some heavyweight UI stuff
   to make it navigable.

  - it's too ambitious

  - have you looked around to see what kind of debugging tools
   people are asking for?  The most oft-requested feature is
   stack traces, and there's lots of scope for doing something
   there (but also many corpses littering the battlefield,
   so watch out!)


I would be much more interested in seeing the foundations improved than I
would be in having nice things built on them.  In other words, I agree with
Simon that stack traces would be many times more valuable to me than
graphical representations.  Once the foundations are robust, then we can
build nice things on top of them.

Perhaps the reason you're interested in graphical representations is because
you want to help people 'visualize', or understand, the problem.  Not all
visualizations need to be graphical in the GUI sense.  It's really about
representing things in a way that helps humans reason about it.  Getting the
right information to people as they need it is probably the best place to
start.

Jason


Re: [Haskell-cafe] Implementation of Functional Languages - a tutorial

2010-03-31 Thread Wei Hu
I was working through this book last year, and posted my work at
https://patch-tag.com/r/wh5a/CoreCompiler/home. It was almost
complete.

On Wed, Mar 31, 2010 at 7:12 AM, C K Kashyap ckkash...@gmail.com wrote:
 Looks like some functions are left as an exercise... I'd appreciate it if
 someone could share a complete code for the Core language

 On Wed, Mar 31, 2010 at 4:20 PM, C K Kashyap ckkash...@gmail.com wrote:

 Great ... thanks Thu ...
 Regards,
 Kashyap

 On Wed, Mar 31, 2010 at 4:15 PM, minh thu not...@gmail.com wrote:

 2010/3/31 C K Kashyap ckkash...@gmail.com:
  Hi Everybody,
  I've started reading SPJ's book - When I tried to execute some sample
  code
  in miranda, I found that Miranda does not seem to recognize things like
  import Utils
  or
  module Langauge where ...
  Has someone created a clean compilable miranda source out of this book?

 Hi,

 If I remember correctly, the version of the book on the net contains
 Haskell code, even if the text is about Miranda. The example you give
 above is definitely Haskell.

 Cheers,
 Thu



 --
 Regards,
 Kashyap



 --
 Regards,
 Kashyap



[Haskell-cafe] Do I need to roll my own?

2010-03-31 Thread David Leimbach
I'm looking at iteratee as a way to replace my erroneous and really
inefficient lazy-IO-based backend for an expect-like monad DSL I've been
working on for about 6 months now, on and off.

The problem is I want something like:

expect some String
send some response

to block or perhaps time out, depending on the environment, looking for some
String on an input Handle, and it appears that iteratee works with a
fixed block size.  While a fixed block size is ok, if I can put back unused
bytes into the enumerator somehow (I may need to put a LOT back in some
cases, but in the common case I will not need to put any back as most
expect-like scripts typically catch the last few bytes of data sent before
the peer is blocked waiting for a response...)

Otherwise, I'm going to want to roll my own iteratee-style library where I
have to say NotDone howMuchMoreIThinkINeed so I don't over-consume the
input stream.

Does that even make any sense?  I'm kind of brainstorming in this email
unfortunately :-)

Dave


Re: [Haskell-cafe] Shootout update

2010-03-31 Thread Don Stewart
Certainly.

rl:
 I'm wondering... Since the DPH libraries are shipped with GHC by default are 
 we allowed to use them for the shootout?
 
 Roman
 
 On 30/03/2010, at 19:25, Simon Marlow wrote:
 
  The shootout (sorry, Computer Language Benchmarks Game) recently updated to 
  GHC 6.12.1, and many of the results got worse.  Isaac Gouy has added the 
  +RTS -qg flag to partially fix it, but that turns off the parallel GC 
  completely and we know that in most cases better results can be had by 
  leaving it on.  We really need to tune the flags for these benchmarks 
  properly.
  
  http://shootout.alioth.debian.org/u64q/haskell.php
  
  It may be that we have to back off to +RTS -N3 in some cases to avoid the 
  last-core problem (http://hackage.haskell.org/trac/ghc/ticket/3553), at 
  least until 6.12.2.
  
  Any volunteers with a quad-core to take a look at these programs and 
  optimise them for 6.12.1?
  
  Cheers,
  Simon


Re: [Haskell-cafe] Implementation of Functional Languages - a tutorial

2010-03-31 Thread C K Kashyap
Thanks Wei ... Having a working version would really help in going through
the tutorial.


On Wed, Mar 31, 2010 at 9:32 PM, Wei Hu wei@gmail.com wrote:

 I was working through this book last year, and posted my work at
 https://patch-tag.com/r/wh5a/CoreCompiler/home. It was almost
 complete.

 On Wed, Mar 31, 2010 at 7:12 AM, C K Kashyap ckkash...@gmail.com wrote:
  Looks like some functions are left as an exercise... I'd appreciate it if
  someone could share a complete code for the Core language
 
  On Wed, Mar 31, 2010 at 4:20 PM, C K Kashyap ckkash...@gmail.com
 wrote:
 
  Great ... thanks Thu ...
  Regards,
  Kashyap
 
  On Wed, Mar 31, 2010 at 4:15 PM, minh thu not...@gmail.com wrote:
 
  2010/3/31 C K Kashyap ckkash...@gmail.com:
   Hi Everybody,
   I've started reading SPJ's book - When I tried to execute some sample
   code
   in miranda, I found that Miranda does not seem to recognize things
 like
   import Utils
   or
   module Langauge where ...
   Has someone created a clean compilable miranda source out of this
 book?
 
  Hi,
 
  If I remember correctly, the version of the book on the net contains
  Haskell code, even if the text is about Miranda. The example you give
  above is definitely Haskell.
 
  Cheers,
  Thu
 
 
 
  --
  Regards,
  Kashyap
 
 
 
  --
  Regards,
  Kashyap
 
 
 




-- 
Regards,
Kashyap


[Haskell-cafe] Re: Announce: Haskell Platform 2010.1.0.0 (beta) release

2010-03-31 Thread Don Stewart
DekuDekuplex:
 Sorry for the late response, but just out of curiosity, are there any
 plans to provide a binary installer for either the Haskell Platform or
 GHC 6.12.1 for Mac OS X Leopard for the PowerPC CPU (as opposed to for
 the Intel x86 CPU)?  I just checked the download-related Web sites for
 both the Haskell Platform for the Mac (see
 http://hackage.haskell.org/platform/mac.html) and for GHC 6.12.1 (see
 http://www.haskell.org/ghc/download_ghc_6_12_1.html), but could find no
 relevant information.
 
 Currently, I'm using GHC 6.8.2, but this is an outdated version.
 

There is no one actively working on this, but should someone like to
build such an installer, the HP site can gladly host it.

-- Don


Re: [Haskell-cafe] benchmarking pure code

2010-03-31 Thread Bryan O'Sullivan
On Wed, Mar 31, 2010 at 4:12 AM, Paul Brauner paul.brau...@loria.fr wrote:

 Thank you, I will look at that. But it seems that criterion uses NFData no?


I do not know of anything wrong with NFData. What you're seeing is much more
likely to be a bug in either the benchmarking library you're using, or in
your use of it. Most of the benchmarking frameworks on Hackage are extremely
dodgy, which was why I wrote criterion.


Re: [Haskell-cafe] Data.Graph?

2010-03-31 Thread Edward Kmett
There are a number of us over on #hnn on freenode hacking away on the
beginnings of a shiny new graph library based on some new tricks for
annotated structures. Feel free to swing by the channel.

-Edward Kmett

On Tue, Mar 30, 2010 at 10:23 PM, Ivan Miljenovic ivan.miljeno...@gmail.com
 wrote:

 Sorry for the duplicate email Lee, but I somehow forgot to CC the
 mailing list :s

 On 31 March 2010 13:12, Lee Pike leep...@gmail.com wrote:
  I'd like it if there were a Data.Graph in the base libraries with basic
  graph-theoretic operations.  Is this something that's been discussed?

 I'm kinda working on a replacement to Data.Graph that will provide
 graph-theoretic operations to a variety of graph types.

  For now, it appears that Graphalyze on Hackage is the most complete
 library
  for graph analysis; is that right?  (I actually usually just want a
 pretty
  small subset of its functionality.)

 Yay, someone likes my code! :p

 I've been thinking about splitting off the algorithms section of
 Graphalyze for a while; maybe I should do so now... (though I was
 going to merge it into the above mentioned so-far-mainly-vapourware
 library...).

 There are a few other alternatives:

 * FGL has a variety of graph operations (but I ended up
 re-implementing a lot of the ones I wanted in Graphalyze because FGL
 returns lists of nodes and I wanted the resulting graphs for things
 like connected components, etc.).
 * The dom-lt library
 * GraphSCC
 * hgal (which is a really atrocious port of nauty that is extremely
 inefficient; I've started work on a replacement)
 * astar (which is generic for all graph types since you provide
 functions on the graph as arguments)

 With the exception of FGL, all of these are basically libraries that
 implement one particular algorithm/operation.

 --
 Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com
 IvanMiljenovic.wordpress.com


[Haskell-cafe] Re: iteratee: Do I need to roll my own?

2010-03-31 Thread Valery V. Vorotyntsev
 I'm looking at iteratee as a way to replace my erroneous and really
 inefficient lazy-IO-based backend for an expect like Monad DSL I've
 been working for about 6 months or so now on and off.

 The problem is I want something like:

 expect "some String"
 send "some response"

 to block or perhaps timeout, depending on the environment, looking for
 "some String" on an input Handle, and it appears that iteratee works
 in a very fixed block size.

Actually, it doesn't. It works with whatever the enumerator gives it.
In case of `enum_fd'[1] this is a fixed block, but generally this is
a ``value'' of some ``collection''[2].  And it is up to the programmer
to decide what should become a value.

  [1] http://okmij.org/ftp/Haskell/Iteratee/IterateeM.hs
  [2] http://okmij.org/ftp/papers/LL3-collections-enumerators.txt

 While a fixed block size is ok, if I can put back unused bytes into
 the enumerator somehow (I may need to put a LOT back in some cases,
 but in the common case I will not need to put any back as most
 expect-like scripts typically catch the last few bytes of data sent
 before the peer is blocked waiting for a response...)

I don't quite get this ``last few bytes'' thing. Could you explain?

I was about writing that there is no problem with putting data back to
Stream, and referring to head/peek functions...  But then I thought,
that the ``not consuming bytes from stream'' approach may not work
well in cases, when the number of bytes needed (by your function to
accept/reject some rule) exceeds the size of underlying memory buffer
(4K in current version of `iteratee' library[3]).

  [3] 
http://hackage.haskell.org/packages/archive/iteratee/0.3.4/doc/html/src/Data-Iteratee-IO-Fd.html

Do you think that abstracting to the level of _tokens_ - instead of
bytes - could help here? (Think of flex and bison.)  You know, these
enumerators/iteratees things can be layered into
_enumeratees_[1][4]... It's just an idea.

  [4] http://ianen.org/articles/understanding-iteratees/

 Otherwise, I'm going to want to roll my own iteratee style library
 where I have to say NotDone howMuchMoreIThinkINeed so I don't over
 consume the input stream.

What's the problem with over-consuming a stream? In your case?

BTW, this `NotDone' is just a ``control message'' to the chunk
producer (an enumerator):

IE_cont k (Just (GimmeThatManyBytes n))

 Does that even make any sense?  I'm kind of brainstorming in this
 email unfortunately :-)

What's the problem with brainstorming? :)

Cheers.

-- 
vvv


Re: [Haskell-cafe] Data Structures GSoC

2010-03-31 Thread wren ng thornton

Nathan Hunter wrote:

Hello.

I am hoping to take on the Data Structures project proposed two years ago by
Don Stewart herehttp://hackage.haskell.org/trac/summer-of-code/ticket/1549,
this summer.
Before I write up my proposal to Google, I wanted to gauge the reaction of
the Haskell community to this project.
Particularly:

-What Data Structures in the current libraries are in most dire need of
improvement?
-How necessary do you think a Containers Library revision is?


One thing I've seen come up repeatedly is the issue of presenting a 
unified and general interface for Data.Map, Data.IntMap, and related 
things like Data.Set, bytestring-trie, pqueue, etc. which are intended 
to mimic their interface. That alone isn't big enough for a GSoC, but it 
would be a very nice thing to have. Every few months there's a request 
on the libraries@ list to alter, generalize, or reunify the map 
interface in some way.


** Just to be clear, I do not mean coming up with a typeclass nor doing 
things like the generalized-trie tricks, I just mean a good 
old-fashioned standard API. **


There are countervailing forces for making a good API. On the one hand 
we want functions to do whatever we need, on the other hand we want the 
API to be small enough to be usable/memorable. In the bytestring-trie 
library I attempted to resolve this conflict by offering a small set of 
highly efficient ueber-combinators in the internals module, a medium 
sized set of functions for standard use in the main module, and then 
pushed most everything else off into a convenience module.


The containers library would do well to follow this sort of design. The 
Data.Map and Data.IntMap structures don't provide the necessary 
ueber-combinators, which has led to the proliferation of convenience 
functions which are more general than the standard-use functions but not 
general enough to make the interface complete. Also, these generalized 
functions are implemented via code duplication rather than having a 
single implementation, which has been known to lead to cut-and-paste bugs 
and maintenance issues. Provided the correct ueber-combinators are 
chosen, there is no performance benefit to this code duplication either 
(so far as I've discovered with bytestring-trie).
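To make the idea concrete, here is a hedged sketch of one such ueber-combinator: a single general merge from which the specialized set-like operations all derive. All names are invented and the list-based implementation is deliberately naive — this is not the actual containers or bytestring-trie API, just an illustration of the interface shape.

```haskell
import qualified Data.Map as M

-- Hypothetical ueber-combinator: one general merge parameterized by
-- what to do with keys found only on the left, only on the right, or
-- on both sides. (Naive implementation, for the interface only.)
mergeMaps :: Ord k
          => (k -> a -> Maybe c)        -- key only in the left map
          -> (k -> b -> Maybe c)        -- key only in the right map
          -> (k -> a -> b -> Maybe c)   -- key in both maps
          -> M.Map k a -> M.Map k b -> M.Map k c
mergeMaps left right both m1 m2 =
    M.fromList (concatMap pick (M.toList merged))
  where
    merged  = M.unionWith combine tagged1 tagged2
    tagged1 = M.map (\a -> (Just a, Nothing)) m1
    tagged2 = M.map (\b -> (Nothing, Just b)) m2
    combine (a, _) (_, b) = (a, b)
    pick (k, (Just a, Nothing)) = maybe [] (\c -> [(k, c)]) (left k a)
    pick (k, (Nothing, Just b)) = maybe [] (\c -> [(k, c)]) (right k b)
    pick (k, (Just a, Just b))  = maybe [] (\c -> [(k, c)]) (both k a b)
    pick _                      = []

-- The usual operations fall out as one-liners:
unionL :: Ord k => M.Map k Int -> M.Map k Int -> M.Map k Int
unionL = mergeMaps (\_ a -> Just a) (\_ b -> Just b) (\_ a _ -> Just a)

intersectionPlus :: Ord k => M.Map k Int -> M.Map k Int -> M.Map k Int
intersectionPlus =
    mergeMaps (\_ _ -> Nothing) (\_ _ -> Nothing) (\_ a b -> Just (a + b))
```

For what it's worth, later containers releases did grow a combinator of roughly this flavour (Data.Map.mergeWithKey), which supports the point that a single general merge can back the whole convenience layer.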


Additionally, it'd be nice if some of the guts were made available for 
public use (e.g., the bit-twiddling tricks of Data.IntMap so I don't 
have to duplicate them in bytestring-trie).


Also it would be nice to develop a cohesive test and benchmarking suite, 
which would certainly be a large enough task for GSoC, though perhaps 
not fundable.


I would be willing to co-mentor API and algorithm design for cleaning 
the cobwebs out of the containers library. I wouldn't have the time for 
mentoring the choice of datastructures or benchmarking however.


--
Live well,
~wren


[Haskell-cafe] Re: iteratee: Do I need to roll my own?

2010-03-31 Thread David Leimbach
First thanks for the reply,

On Wed, Mar 31, 2010 at 11:15 AM, Valery V. Vorotyntsev valery...@gmail.com
 wrote:

  I'm looking at iteratee as a way to replace my erroneous and really
  inefficient lazy-IO-based backend for an expect like Monad DSL I've
  been working for about 6 months or so now on and off.
 
  The problem is I want something like:
 
  expect some String
  send some response
 
  to block or perhaps timeout, depending on the environment, looking for
  some String on an input Handle, and it appears that iteratee works
  in a very fixed block size.

 Actually, it doesn't. It works with what enumerator gives him.
 In case of `enum_fd'[1] this is a fixed block, but generally this is
 a ``value'' of some ``collection''[2].  And it is up to programmer to
 decide of what should become a value.

  [1] http://okmij.org/ftp/Haskell/Iteratee/IterateeM.hs
  [2] http://okmij.org/ftp/papers/LL3-collections-enumerators.txt



  While a fixed block size is ok, if I can put back unused bytes into
  the enumerator somehow (I may need to put a LOT back in some cases,
  but in the common case I will not need to put any back as most
  expect-like scripts typically catch the last few bytes of data sent
  before the peer is blocked waiting for a response...)

 I don't quite get this ``last few bytes'' thing. Could you explain?


What I mean is: let's say the stream has

"abcd efg abcd efg"

and then I run some kind of iteratee computation looking for "abcd",

and the block size was fixed to cause a read of 1024 bytes, but returns as much
as it can, providing it to the iteratee to deal with.  The iteratee, with which I
want to implement Expect-like behavior, would really only want to read up to
"abcd", consuming that from the input stream.  Does the iteratee get the
whole stream that was read by the enumerator, or is it supplied a single
atomic unit at a time, such as a character, at which I can halt the
consumption of the streamed data?

What I don't want to have happen is my consuming bytes from the input
Handle, only to have them ignored, as the second instance of "abcd" could be
important.

I'm actually not sure that was very clear :-).   I don't want to throw out
bytes by accident if that's even possible.
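A library-free toy sketch of the behaviour being asked about: feed a matcher one chunk at a time, and it reports either the leftover of the buffer after the pattern (so nothing past the match is thrown away) or a small carry-over state for the next chunk. All names here are invented — the real iteratee library expresses the leftover through its Done constructor's residual stream instead.

```haskell
import Data.List (isPrefixOf)

-- Result of feeding one chunk to the matcher.
data Step = Matched String   -- pattern found; unconsumed rest of the input
          | NeedMore String  -- no match yet; carry-over for the next chunk
  deriving (Eq, Show)

-- Search for `pat` in `carry ++ chunk`.  On failure, keep only the last
-- (length pat - 1) characters, since only those can still begin a match
-- that straddles the chunk boundary.
expectChunk :: String -> String -> String -> Step
expectChunk pat carry chunk = search buffer
  where
    buffer = carry ++ chunk
    n      = length pat
    search s
      | pat `isPrefixOf` s = Matched (drop n s)
      | null s             = NeedMore (lastN (n - 1) buffer)
      | otherwise          = search (drop 1 s)
    lastN k xs = drop (length xs - k) xs
```

Feeding "xxab" and then "cd efg" while expecting "abcd" yields NeedMore "xab" followed by Matched " efg": the bytes after the pattern survive to be consumed by the next step rather than being lost with the 1024-byte read.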

My discomfort with Iteratee is that most Haskell texts really want you to go
the way of lazy IO, which has led me to a good bit of trouble, and I've
never seen a very comprehensive tutorial of Iteratee available anywhere.  I
am reading the Examples that come with the hackage package though.



 I was about writing that there is no problem with putting data back to
 Stream, and referring to head/peek functions...  But then I thought,
 that the ``not consuming bytes from stream'' approach may not work
 well in cases, when the number of bytes needed (by your function to
 accept/reject some rule) exceeds the size of underlying memory buffer
 (4K in current version of `iteratee' library[3]).

  [3]
 http://hackage.haskell.org/packages/archive/iteratee/0.3.4/doc/html/src/Data-Iteratee-IO-Fd.html

 Do you think that abstracting to the level of _tokens_ - instead of
 bytes - could help here? (Think of flex and bison.)  You know, these
 enumerators/iteratees things can be layered into
 _enumeratees_[1][4]... It's just an idea.


Now that's an interesting idea, and sort of where my previous confusing
answer seemed to be heading.  I wasn't sure if the iteratee was provided a
byte, a char, or a token.  If I can tell the enumerator to only send tokens
to the iteratee (which I'd have to define), then perhaps I can ignore the
amount consumed per read, and let the enumerator deal with that
buffering issue directly.  Perhaps that's how iteratee really works anyway!


  [4] http://ianen.org/articles/understanding-iteratees/

  Otherwise, I'm going to want to roll my own iteratee style library
  where I have to say NotDone howMuchMoreIThinkINeed so I don't over
  consume the input stream.

 What's the problem with over-consuming a stream? In your case?


Well my concern is if it's read from the input stream, and then not used,
the next time I access it, I'm not certain what's happened to the buffer.
 However I suppose it's really a 2-level situation where the enumerator
pulls out some fixed chunk from a Handle or FD or what have you, and then
folds the iteratee over the buffer in some sized chunk.

In C++ I've used ideas like this example that a professor I had in college
showed me from a newsgroup he helped to moderate.

int main () {
    std::cout << "Word count on stdin: "
              << std::distance(std::istream_iterator<std::string>(std::cin),
                               std::istream_iterator<std::string>())
              << std::endl;
}

If the code were changed to be:

int main () {
    std::cout << "Character count on stdin: "
              << std::distance(std::istreambuf_iterator<char>(std::cin),
                               std::istreambuf_iterator<char>())
              << std::endl;
}

We get different behavior out of the upper level distance algorithm due to
the kind of iterator, while distance does a form of folding over the
iterators, but it's actually 

Re: [Haskell-cafe] Do I need to roll my own?

2010-03-31 Thread Gregory Collins
David Leimbach leim...@gmail.com writes:

 to block or perhaps timeout, depending on the environment, looking for
 some String on an input Handle, and it appears that iteratee works
 in a very fixed block size.  While a fixed block size is ok, if I can
 put back unused bytes into the enumerator somehow (I may need to put a
 LOT back in some cases, but in the common case I will not need to put
 any back as most expect-like scripts typically catch the last few
 bytes of data sent before the peer is blocked waiting for a
 response...)

See IterGV from the iteratee lib:

http://hackage.haskell.org/packages/archive/iteratee/0.3.1/doc/html/Data-Iteratee-Base.html#t%3AIterGV

The second argument to the Done constructor is for the portion of the
input that you didn't use. If you use the Monad instance, the unused
input is passed on (transparently) to the next iteratee in the chain.

If you use attoparsec-iteratee
(http://hackage.haskell.org/packages/archive/attoparsec-iteratee/0.1/doc/html/Data-Attoparsec-Iteratee.html),
you could write expect as an attoparsec parser:


{-# LANGUAGE OverloadedStrings #-}

import Control.Applicative
import Control.Monad.Trans (lift)
import Data.Attoparsec hiding (Done)
import Data.Attoparsec.Iteratee
import qualified Data.ByteString as S
import Data.ByteString (ByteString)
import Data.Iteratee
import Data.Iteratee.IO.Fd
import Data.Iteratee.WrappedByteString
import Data.Word (Word8)
import System.IO
import System.Posix.IO

expect :: (Monad m) => ByteString
       -> IterateeG WrappedByteString Word8 m ()
expect s = parserToIteratee (p >> return ())
  where
    p = string s <|> (anyWord8 >> p)


dialog :: (Monad m) =>
          IterateeG WrappedByteString Word8 m a   -- ^ output end
       -> IterateeG WrappedByteString Word8 m ()
dialog outIter = do
    expect "login:"
    respond "foo\n"
    expect "password:"
    respond "bar\n"
    return ()

  where
    respond s = do
        _ <- lift $ enumPure1Chunk (WrapBS s) outIter >>= run
        return ()


main :: IO ()
main = do
    hSetBuffering stdin NoBuffering
    hSetBuffering stdout NoBuffering
    enumFd stdInput (dialog output) >>= run
  where
    output = IterateeG $ \chunk ->
             case chunk of
               (EOF _)            -> return $ Done () chunk
               (Chunk (WrapBS s)) -> S.putStr s >>
                                     hFlush stdout >>
                                     return (Cont output Nothing)


Usage example:

$ awk 'BEGIN { print "login:"; fflush(); system("sleep 2"); \
               print "password:"; fflush(); }' | runhaskell Expect.hs
foo
bar

N.B. for some reason enumHandle doesn't work here w.r.t buffering, had
to go to POSIX i/o to get the proper buffering behaviour.

G
--
Gregory Collins g...@gregorycollins.net


Re: [Haskell-cafe] Do I need to roll my own?

2010-03-31 Thread David Leimbach
On Wed, Mar 31, 2010 at 12:02 PM, Gregory Collins
g...@gregorycollins.netwrote:

 David Leimbach leim...@gmail.com writes:

  to block or perhaps timeout, depending on the environment, looking for
  some String on an input Handle, and it appears that iteratee works
  in a very fixed block size.  While a fixed block size is ok, if I can
  put back unused bytes into the enumerator somehow (I may need to put a
  LOT back in some cases, but in the common case I will not need to put
  any back as most expect-like scripts typically catch the last few
  bytes of data sent before the peer is blocked waiting for a
  response...)

 See IterGV from the iteratee lib:


 http://hackage.haskell.org/packages/archive/iteratee/0.3.1/doc/html/Data-Iteratee-Base.html#t%3AIterGV

 The second argument to the Done constructor is for the portion of the
 input that you didn't use. If you use the Monad instance, the unused
 input is passed on (transparently) to the next iteratee in the chain.


 If you use attoparsec-iteratee
 (
 http://hackage.haskell.org/packages/archive/attoparsec-iteratee/0.1/doc/html/Data-Attoparsec-Iteratee.html
 ),
 you could write expect as an attoparsec parser:


 
 {-# LANGUAGE OverloadedStrings #-}

 import Control.Applicative
 import Control.Monad.Trans (lift)
 import Data.Attoparsec hiding (Done)
 import Data.Attoparsec.Iteratee
 import qualified Data.ByteString as S
 import Data.ByteString (ByteString)
 import Data.Iteratee
 import Data.Iteratee.IO.Fd
 import Data.Iteratee.WrappedByteString
 import Data.Word (Word8)
 import System.IO
 import System.Posix.IO

 expect :: (Monad m) => ByteString
        -> IterateeG WrappedByteString Word8 m ()
 expect s = parserToIteratee (p >> return ())
   where
     p = string s <|> (anyWord8 >> p)


 dialog :: (Monad m) =>
           IterateeG WrappedByteString Word8 m a   -- ^ output end
        -> IterateeG WrappedByteString Word8 m ()
 dialog outIter = do
     expect "login:"
     respond "foo\n"
     expect "password:"
     respond "bar\n"
     return ()

   where
     respond s = do
         _ <- lift $ enumPure1Chunk (WrapBS s) outIter >>= run
         return ()


 main :: IO ()
 main = do
     hSetBuffering stdin NoBuffering
     hSetBuffering stdout NoBuffering
     enumFd stdInput (dialog output) >>= run
   where
     output = IterateeG $ \chunk ->
              case chunk of
                (EOF _)            -> return $ Done () chunk
                (Chunk (WrapBS s)) -> S.putStr s >>
                                      hFlush stdout >>
                                      return (Cont output Nothing)

 Usage example:

 $ awk 'BEGIN { print "login:"; fflush(); system("sleep 2"); \
                print "password:"; fflush(); }' | runhaskell Expect.hs
foo
bar

 N.B. for some reason enumHandle doesn't work here w.r.t buffering, had
 to go to POSIX i/o to get the proper buffering behaviour.

 That's pretty neat actually.  I'm going to have to incorporate timeouts
into something like that (and attoparsec-iteratee doesn't install for me for
some reason, I'll try again today).

That leads me to another question in another thread I'm about to start.

Dave



 G
 --
 Gregory Collins g...@gregorycollins.net



Re: [Haskell-cafe] building encoding on Windows?

2010-03-31 Thread Andrew Coppin

Han Joosten wrote:

The haskell platform should take care of a lot of installation pain,
especially for non-technical users.


I note with dismay that there's a proposal to remove OpenGL from HP. 
Assuming this gets approved, that'll be one less library that you can 
use on Windows. I thought the idea of HP was to add new stuff, not 
remove existing stuff...



A new version is due to be released
pretty soon (sometime at the beginning of April). It has MinGW and MSYS included, and
also some pre-built binaries like cabal and haddock.


Plain GHC has included Haddock for a while now. It seems to me that 
including the entirety of MinGW and MSYS is overkill, but what do I know 
about the matter? I was however under the impression that to use Unix 
emulators such as these, you have to run everything from a separate 
shell rather than the usual Windows shell.



It should be possible
for a lot of packages to say 'cabal install package' at the command prompt
to get your package up and running. I think that this is pretty cool, and
most non-technical users should be able to get this to work without a lot of
pain. 
  


Having gone through the pain of setting up OpenSUSE and convincing it to 
install GHC, it was quite impressive to use. On Linux, it seems you can 
actually type cabal install zlib and reasonably expect it to actually 
work. Typically, it'll crash and say install zlib-devel. You install 
that, rerun the command, and somehow it knows that you've installed the 
headers and where you've put them, and It Just Works(tm). Which is quite 
impressive.


What isn't impressive is that if you ask to install something, and one 
of its dependencies fails to build, the failure message will get buried 
amongst several pages of other stuff. At the end it will say 
something like "package Foo failed to build. The build failure was: exit 
code 1." Yeah, that's really helpful. Fortunately, if you just rebuild, 
it will only try to rebuild the missing dependencies, so you can usually 
catch the real error message scrolling past (invariably some C headers not 
installed). I also have a personal wish that more tools would make use 
of coloured text output to make it easier to see what's happening in the 
sea of text scrolling past. (We even have a fully portable Haskell 
library for doing this - and it even builds cleanly on Windows!)


My hope is that the more useful C libraries will get added to HP so that 
I can start using them. (E.g., I'd love to be able to do sound synthesis 
using Haskell, but there aren't any libraries for accessing the sound 
hardware...)




Re: [Haskell-cafe] Do I need to roll my own?

2010-03-31 Thread David Leimbach
On Wed, Mar 31, 2010 at 12:24 PM, David Leimbach leim...@gmail.com wrote:



 On Wed, Mar 31, 2010 at 12:02 PM, Gregory Collins g...@gregorycollins.net
  wrote:

 David Leimbach leim...@gmail.com writes:

  to block or perhaps timeout, depending on the environment, looking for
  some String on an input Handle, and it appears that iteratee works
  in a very fixed block size.  While a fixed block size is ok, if I can
  put back unused bytes into the enumerator somehow (I may need to put a
  LOT back in some cases, but in the common case I will not need to put
  any back as most expect-like scripts typically catch the last few
  bytes of data sent before the peer is blocked waiting for a
  response...)

 See IterGV from the iteratee lib:


 http://hackage.haskell.org/packages/archive/iteratee/0.3.1/doc/html/Data-Iteratee-Base.html#t%3AIterGV

 The second argument to the Done constructor is for the portion of the
 input that you didn't use. If you use the Monad instance, the unused
 input is passed on (transparently) to the next iteratee in the chain.


 If you use attoparsec-iteratee
 (
 http://hackage.haskell.org/packages/archive/attoparsec-iteratee/0.1/doc/html/Data-Attoparsec-Iteratee.html
 ),
 you could write expect as an attoparsec parser:


 
 {-# LANGUAGE OverloadedStrings #-}

 import Control.Applicative
 import Control.Monad.Trans (lift)
 import Data.Attoparsec hiding (Done)
 import Data.Attoparsec.Iteratee
 import qualified Data.ByteString as S
 import Data.ByteString (ByteString)
 import Data.Iteratee
 import Data.Iteratee.IO.Fd
 import Data.Iteratee.WrappedByteString
 import Data.Word (Word8)
 import System.IO
 import System.Posix.IO

 expect :: (Monad m) => ByteString
        -> IterateeG WrappedByteString Word8 m ()
 expect s = parserToIteratee (p >> return ())
   where
     p = string s <|> (anyWord8 >> p)


 dialog :: (Monad m) =>
           IterateeG WrappedByteString Word8 m a   -- ^ output end
        -> IterateeG WrappedByteString Word8 m ()
 dialog outIter = do
     expect "login:"
     respond "foo\n"
     expect "password:"
     respond "bar\n"
     return ()

   where
     respond s = do
         _ <- lift $ enumPure1Chunk (WrapBS s) outIter >>= run
         return ()


 main :: IO ()
 main = do
     hSetBuffering stdin NoBuffering
     hSetBuffering stdout NoBuffering
     enumFd stdInput (dialog output) >>= run
   where
     output = IterateeG $ \chunk ->
              case chunk of
                (EOF _)            -> return $ Done () chunk
                (Chunk (WrapBS s)) -> S.putStr s >>
                                      hFlush stdout >>
                                      return (Cont output Nothing)

 Usage example:

 $ awk 'BEGIN { print "login:"; fflush(); system("sleep 2"); \
                print "password:"; fflush(); }' | runhaskell Expect.hs
foo
bar

 N.B. for some reason enumHandle doesn't work here w.r.t buffering, had
 to go to POSIX i/o to get the proper buffering behaviour.

 That's pretty neat actually.  I'm going to have to incorporate timeouts
 into something like that (and attoparsec-iteratee doesn't install for me for
 some reason, I'll try again today).


worked fine today...



 That leads me to another question in another thread I'm about to start.


And that other thread is not going to happen, because I realized I was just
having issues with non-strict vs strict evaluation :-)  It makes perfect
sense now...

gist is:

timeout (10 ^ 6) $ return $ sum [1..]

and

timeout (10 ^ 6) $! return $ sum [1..]

will not timeout, and will hang while

timeout (10 ^ 6) $ return $! sum [1..]

does timeout... and everything in the Haskell universe is nice and
consistent.
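A runnable sketch of the distinction, using evaluate to force the sum inside the timed action (assumes GHC's System.Timeout; treat it as illustrative rather than definitive):

```haskell
import System.Timeout (timeout)
import Control.Exception (evaluate)

main :: IO ()
main = do
  -- The divergent sum is forced *inside* the timed action, so the
  -- timeout can interrupt it: we get Nothing after ~0.1 s.
  r <- timeout 100000 (evaluate (sum [1 :: Integer ..]))
  print r
  -- With plain 'return' the action completes instantly, handing back
  -- an unevaluated thunk; any hang happens later, outside the timeout,
  -- when the thunk is demanded (kept finite here, so it's just Just 55).
  r2 <- timeout 100000 (return (sum [1 :: Integer .. 10]))
  print r2
```

This is the same consistency David describes: timeout can only interrupt work that actually happens while its action runs, and `return thunk` does essentially no work.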

Dave



 Dave



 G
 --
 Gregory Collins g...@gregorycollins.net





[Haskell-cafe] Hughes' parallel annotations for fixing a space leak

2010-03-31 Thread Heinrich Apfelmus
Dear haskell-cafe,

I've been thinking about space leaks lately. In particular, I've been
studying the tricky example with pairs

break [] = ([],[])
break (x:xs) = if x == '\n' then ([],xs) else (x:ys,zs)
where (ys,zs) = break xs

which was discussed in the papers

Jan Sparud. Fixing some space leaks without a garbage collector.
http://bit.ly/cOxcVJ

Philip Wadler. Fixing some space leaks with a garbage collector.
http://homepages.inf.ed.ac.uk/wadler/topics/garbage-collection.html

As I understand it, GHC implements the technique from Sparud's paper, so
this is a solved problem. (It will only kick in when compiled with
optimization, though, so -O0 will make the above leak in GHC 6.10.4 if I
checked that correctly.)


Now, I'm wondering about alternative solutions. Wadler mentions some
sort of parallel combinators

break xs = (PAR before '\n' ys, PAR after '\n' zs)
where (ys,zs) = SYNCLIST xs

which were introduced by John Hughes in his Phd thesis from 1983. They
are intriguing! Unfortunately, I haven't been able to procure a copy of
Hughes' thesis, either electronic or in paper. :( Can anyone help? Are
there any other resources about this parallel approach?



Regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com



[Haskell-cafe] Shootout update

2010-03-31 Thread Isaac Gouy
On Mar 30, 1:26 am, Simon Marlow marlo...@gmail.com wrote:
 The shootout (sorry, Computer Language Benchmarks Game) ...

In a different time, in a different place, the shootout meant a football once 
again flying over the cross bar or harmlessly into the arms of the keeper and 
England once more exiting an international competition.

Here in the west it has meant slaughter - back in 2004 crossed-pistols were 
suggested as the website image.

Wading through Google search results comprised of porn sites and college mass 
murder just wasn't a bright happy start to the day for me - so after Virginia 
Tech I changed the name.

I should probably have moved everything to a new project (a new URL).


  


Re: [Haskell-cafe] cabal through PHPProxy

2010-03-31 Thread Valery V. Vorotyntsev
 i'm using PHPProxy to go on the internet (www.phpproxy.fr).
 How can i tell cabal to use this?
...
 PHPProxy is a web based proxy.  To use it, you have to type the http
 address you want in a field on the page.

You still need some web access in order to reach a web-based proxy...
Is www.phpproxy.fr the only outside site you can connect to directly?

I am just curious what kind of LAN you have got. :) How do you
update your OS, anyway? How do you download files?

* * *

Try the following:

# Download phpproxy
wget -c 'http://idea.hosting.lv/a/phpproxy/phpproxy-0.6.tar.gz'

# Unpack
tar -xzf phpproxy-0.6.tar.gz

# Launch PHPProxy client
cd phpproxy-0.6
python phpproxy.py

# In another terminal, try running cabal with `http_proxy'
http_proxy=http://localhost:8080/ cabal update --verbose=3

Please, report your progress.

Good luck!

-- 
vvv


[Haskell-cafe] Re: Do I need to roll my own?

2010-03-31 Thread John Lato
Hi Dave,

 From: David Leimbach leim...@gmail.com

 I'm looking at iteratee as a way to replace my erroneous and really
 inefficient lazy-IO-based backend for an expect like Monad DSL I've been
 working for about 6 months or so now on and off.

 The problem is I want something like:

 expect some String
 send some response

 to block or perhaps timeout, depending on the environment, looking for some
 String on an input Handle, and it appears that iteratee works in a very
 fixed block size.  While a fixed block size is ok, if I can put back unused
 bytes into the enumerator somehow (I may need to put a LOT back in some
 cases, but in the common case I will not need to put any back as most
 expect-like scripts typically catch the last few bytes of data sent before
 the peer is blocked waiting for a response...)

I'm quite sure I don't know what you're trying to do.  The only time I
can think of needing this is if you're running an iteratee on a file
handle, keeping the handle open, then running another iteratee on it.
Is this what you're doing?  If so, I would make a new run function:

runResidue :: (Monad m, SC.StreamChunk s el) => IterateeG s el m a -> m (a, s)
runResidue iter = runIter iter (EOF Nothing) >>= \res ->
  case res of
    Done x s -> return (x, s)
    Cont _ e -> error $ "control message: " ++ show e

This function will return the unused portion of the stream, then you
can do this:

enumResidue :: Handle -> s -> EnumeratorGM s el m a
enumResidue h s = enumPure1Chunk s . enumHandle h

Is this what you need?  If I'm completely wrong about what you're
trying to do (or you're using multiple threads), there are other
options.

You also may want to look at iteratee-HEAD.  The implementation has
been cleaned up a lot, the block sizes are user-specified, and there's
an exception-based, user-extensible mechanism for iteratees to alter
enumerator behavior.

Sincerely,
John


[Haskell-cafe] Re: Do I need to roll my own?

2010-03-31 Thread David Leimbach
On Wed, Mar 31, 2010 at 2:12 PM, John Lato jwl...@gmail.com wrote:

 Hi Dave,

  From: David Leimbach leim...@gmail.com
 
  I'm looking at iteratee as a way to replace my erroneous and really
  inefficient lazy-IO-based backend for an expect like Monad DSL I've been
  working for about 6 months or so now on and off.
 
  The problem is I want something like:
 
  expect some String
  send some response
 
  to block or perhaps timeout, depending on the environment, looking for
 some
  String on an input Handle, and it appears that iteratee works in a very
  fixed block size.  While a fixed block size is ok, if I can put back
 unused
  bytes into the enumerator somehow (I may need to put a LOT back in some
  cases, but in the common case I will not need to put any back as most
  expect-like scripts typically catch the last few bytes of data sent
 before
  the peer is blocked waiting for a response...)

 I'm quite sure I don't know what you're trying to do.  The only time I
 can think of needing this is if you're running an iteratee on a file
 handle, keeping the handle open, then running another iteratee on it.
 Is this what you're doing?  If so, I would make a new run function:

 runResidue :: (Monad m, SC.StreamChunk s el) => IterateeG s el m a -> m (a, s)
 runResidue iter = runIter iter (EOF Nothing) >>= \res ->
   case res of
     Done x s -> return (x, s)
     Cont _ e -> error $ "control message: " ++ show e

 This function will return the unused portion of the stream, then you
 can do this:

 enumResidue :: Handle -> s -> EnumeratorGM s el m a
 enumResidue h s = enumPure1Chunk s . enumHandle h

 Is this what you need?  If I'm completely wrong about what you're
 trying to do (or you're using multiple threads), there are other
 options.


The problem is I am not sure what it was I needed to get started to begin
with.  For a moment it seemed that I could be throwing out data that's been
read, but not yet fed to an iteratee step.  If that's not the case, I'll
never need to put back any characters. The attoparsec-iteratee example
posted, plus some experimentation has led me to believe I don't need to
worry about this sort of thing for the kind of processing I'm looking to do.



 You also may want to look at iteratee-HEAD.  The implementation has
 been cleaned up a lot, the block sizes are user-specified, and there's
 an exception-based, user-extensible mechanism for iteratees to alter
 enumerator behavior.


That's very compelling.

Here's the properties of the system I'm trying to build (in fact I've built
this system with Haskell already months ago, but trying to evaluate if
iteratee can fix problems I've got now)

1. Must have an expect-like language to a subprocess over a pair of Handles
such that I can query what is normally a command line interface as a polling
refresher thread to a cache.  Note that there may be many sub-processes with
a poller/cache (up to 6 so far).  Data produced is dumped as records to
stdout from each thread such that the process that spawned this Haskell
program can parse those records and update its view of this particular part
of the world.

2. Must be able to interleave commands between polled record data from the
processes in 1.  These commands come in over this process's stdin from the
program that started the Haskell program.

3. The polling process in 1, must be able to respond to a timeout situation.
 In this system, a cable could become disconnected or a part of the system
could become unavailable, or a part of the system underneath could become
unreliable and require a restart to guarantee the serviceability of the
whole system.

I have 1 and 2 working fine in the current iteration of the system, but
because we're dealing with a pretty complex stack, 3 is really necessary
too, and I've conquered my timeout issues from earlier in a reasonable
enough way.  The problem is that the timeout handler apparently runs into
trouble that I think stems from lazy IO not having been evaluated on a
handle yet: the timeout handler has invalidated that handle, and the
partially applied thunk is now talking to some broken value.

As far as I know, I can't go back and prevent previously bound thunks that
have the wrong handle from executing, if that is truly what's happening.
What I'd like to do is prevent that situation from ever arising in the
first place, or at least rule it out.

I'm hoping to base version 2 of this system on iteratee and avoid this sort
of problem altogether.

The alternative is to write this in another language, but that would throw
out a lot of nice, simple code in the non-IO parts of this program for
parsing and data serialization.

Dave


 Sincerely,
 John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: iteratee: Do I need to roll my own?

2010-03-31 Thread Bas van Dijk
On Wed, Mar 31, 2010 at 7:42 PM, David Leimbach leim...@gmail.com wrote:
 What I mean is let's say the stream has
 "abcd efg abcd efg"
 and then I run some kind of iteratee computation looking for "abcd"

You could adapt the 'heads' function from the iteratee package to do this:

http://hackage.haskell.org/packages/archive/iteratee/0.3.4/doc/html/src/Data-Iteratee-Base.html#heads

 and the block size was fixed to cause a read of 1024 bytes, but returns as
 much as it can, providing it to the iteratee to deal with.  The iteratee,
 with which I want to implement Expect-like behavior, would really only want
 to read up to "abcd", consuming that from the input stream.  Does the
 iteratee get the whole stream that was read by the enumerator, or is it
 supplied a single atomic unit at a time, such as a character, with which I
 can halt the consumption of the streamed data?

The iteratee will be applied to the whole stream that was read by the
enumerator. You should ensure that the part of this input stream which
is not needed for the result is saved in the 'Done' constructor so
that other iteratees may consume it.

 What I don't want to have happen is my consuming bytes from the input
 Handle, only to have them ignored, as the second instance of "abcd" could
 be important.

Note that an IterateeG has an instance for Monad which allows you to
sequentially compose iteratees. If you write a 'match' iteratee (by
adapting the 'heads' function I mentioned earlier which matches a
given string against the first part of a stream) you can compose these
sequentially:

foo = match "abcd" >> match "efg" >> foo

The first match will be applied to the stream that was read by the
enumerator. It will consume the "abcd" and save the rest of the
stream (in the 'Done' constructor). The second match will first be
applied to the saved stream from the first match. If this stream was
not big enough the iteratee will ask for more (using the 'Cont'
constructor). The enumerator will then do a second read and applies
the continuation (stored in the 'Cont' constructor) to the new stream.
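To make the Done/Cont mechanics concrete, here is a toy model of the idea (my own simplified types, not the actual IterateeG/StreamG from the iteratee package):

```haskell
import Data.List (isPrefixOf)
import Control.Monad (liftM, ap)

-- A chunk of input, or end-of-stream.
data Stream = Chunk String | EOF

-- Done carries the result *and* the unconsumed rest of the stream;
-- Cont asks the enumerator for more input.
data Iter a = Done a String
            | Cont (Stream -> Iter a)

instance Functor Iter where fmap = liftM
instance Applicative Iter where
  pure x = Done x ""
  (<*>)  = ap
instance Monad Iter where
  Done a rest >>= f = case f a of
    Done b rest' -> Done b (rest' ++ rest)       -- nothing more consumed
    Cont k       -> if null rest then Cont k
                                 else k (Chunk rest)  -- replay the leftover
  Cont k >>= f = Cont (\s -> k s >>= f)

-- Consume exactly the given prefix; anything after it is saved in Done.
match :: String -> Iter ()
match pat = Cont (step pat)
  where
    step p (Chunk s)
      | p `isPrefixOf` s = Done () (drop (length p) s)      -- leftover saved
      | s `isPrefixOf` p = Cont (step (drop (length s) p))  -- need more input
      | otherwise        = error "match failed"
    step _ EOF           = error "unexpected EOF"

-- A stand-in enumerator: feed chunks until the iteratee is Done.
feed :: Iter a -> [String] -> Maybe (a, String)
feed (Done a rest) _      = Just (a, rest)
feed (Cont k)      (c:cs) = feed (k (Chunk c)) cs
feed (Cont k)      []     = case k EOF of
                              Done a rest -> Just (a, rest)
                              Cont _      -> Nothing
```

Here feed plays the enumerator's role: it keeps supplying chunks until the iteratee is Done, and the leftover string is exactly what a following iteratee would see first, e.g. `feed (match "abcd" >> match " efg") ["abcd", " ef", "g tail"]` yields `Just ((), " tail")`.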

You may also consider using actual parser combinators build on top of iteratee:

http://hackage.haskell.org/package/attoparsec-iteratee
http://hackage.haskell.org/package/iteratee-parsec

(I typed this in a hurry so some things may be off a bit)

regards,

Bas
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Seeking advice about monadic traversal functions

2010-03-31 Thread Darryn Reid
Heinrich,

Thanks for your excellent response! Indeed, it was the rebuilding of the
tree that had me stumped. I also see the benefits of using the lift
functions, thanks again for this insight.

Darryn.

On Wed, 2010-03-31 at 12:44 +0200, Heinrich Apfelmus wrote:
 Darryn Reid wrote:
 
  I've coded a (fairly) general rewriting traversal - I suspect the
  approach might be generalisable to all tree-like types, but this doesn't
  concern me much right now. My purpose is for building theorem provers
  for a number of logics, separating the overall control mechanism from
  the specific rules for each logic.
  
  The beauty of Haskell here is in being able to abstract the traversal
  away from the specific reduction rules and languages of the different
  logics. I have paired down the code here to numbers rather than modal
  formulas for the sake of clarity and simplicity. My question is
  two-fold:
  1. Does my representation of the traversal seem good or would it be
  better expressed differently? If so, I'd appreciate the thoughts of more
  experienced Haskell users.
 
 Looks fine to me, though I have no experience with tree
 rewriting. :) I'd probably write it like this:
 
  data Tableau a = Nil                           -- End of a path
                 | Single a (Tableau a)          -- Conjunction
                 | Fork (Tableau a) (Tableau a)  -- Disjunction
                 deriving (Eq, Show)
  data Action = Cut | Next
  type Rewriter m a = Tableau a -> m (Tableau a, Action)
 
 rewrite :: Monad m => Rewriter m a -> Tableau a -> m (Tableau a)
 rewrite f t = f t >>= (uncurry . flip) go
     where
     go Cut  t             = return t
     go Next Nil           = return Nil
     go Next (Single x t1) = liftM (Single x) (rewrite f t1)
     go Next (Fork t1 t2 ) = liftM2 Fork (rewrite f t1)
                                         (rewrite f t2)
 
 In particular,  liftM  and  liftM2  make it apparent that we're just
 wrapping the result in a constructor.
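For anyone wanting to run this, here is a self-contained version; the `doubleAll` rewriter in the Maybe monad is my own illustration, not part of the thread:

```haskell
import Control.Monad (liftM, liftM2)

data Tableau a = Nil                           -- End of a path
               | Single a (Tableau a)          -- Conjunction
               | Fork (Tableau a) (Tableau a)  -- Disjunction
               deriving (Eq, Show)

data Action = Cut | Next
type Rewriter m a = Tableau a -> m (Tableau a, Action)

-- Apply f at a node; Next recurses into the children, Cut stops.
rewrite :: Monad m => Rewriter m a -> Tableau a -> m (Tableau a)
rewrite f t = f t >>= (uncurry . flip) go
  where
    go Cut  t'            = return t'
    go Next Nil           = return Nil
    go Next (Single x t1) = liftM (Single x) (rewrite f t1)
    go Next (Fork t1 t2)  = liftM2 Fork (rewrite f t1) (rewrite f t2)

-- Example rewriter in Maybe: double every label, never cutting.
doubleAll :: Tableau Int -> Maybe (Tableau Int)
doubleAll = rewrite step
  where
    step Nil           = Just (Nil, Next)
    step (Single x t1) = Just (Single (2*x) t1, Next)
    step (Fork t1 t2)  = Just (Fork t1 t2, Next)
```

For example, `doubleAll (Fork (Single 1 Nil) (Single 2 Nil))` gives `Just (Fork (Single 2 Nil) (Single 4 Nil))`.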
 
 
 In case you want more flexibility in moving from children to parents,
 you may want to have a look at zippers
 
   http://en.wikibooks.org/wiki/Haskell/Zippers
 
  2. I cannot really see how to easily extend this to a queue-based
  breadth-first traversal, which would give me fairness. I'm sure others
  must have a good idea of how to do what I'm doing here except in
  breadth-first order; I'd appreciate it very much if someone could show
  me how to make a second breadth-first version.
 
 This is more tricky than I thought! Just listing the nodes in
 breadth-first order is straightforward, but the problem is that you
 also want to build the result tree. Depth-first search follows the tree
 structure more closely, so building the result was no problem.
 
 The solution is to solve the easy problem of listing all nodes
 in breadth-first order first, because it turns out that it's possible to
 reconstruct the result tree from such a list! In other words, it's
 possible to run breadth-first search in reverse, building the tree from
 a list of nodes.
 
 How exactly does this work? If you think about it, the analogous problem
 for depth-first search is not too difficult, you just walk through the
 list of nodes and build a stack; pushing  Nil  nodes and combining the
 top two items when encountering  Fork  nodes. So, for solving the
 breadth-first version, the idea is to build a queue instead of a stack.
 
 The details of that are a bit tricky of course, you have to be careful
 when to push what on the queue. But why bother being careful if we have
 Haskell? I thus present the haphazardly named:
 
 
   Lambda Fu, form 132 - linear serpent inverse
 
 The idea is to formulate breadth-first search in a way that is *trivial*
 to invert, and the key ingredient to that is a *linear* function, i.e.
 one that never discards or duplicates data, but only shuffles it around.
 Here is what I have in mind:
 
 {-# LANGUAGE ViewPatterns #-}
 import Data.Sequence as Seq  -- this will be our queue type
 import Data.List as List

 type Node a  = Tableau a -- just a node without children (ugly type)
 type State a = ([Action], [(Action, Node a)], Seq (Tableau a))
 queue (_,_,q) = q
 nodes (_,l,_) = l

 analyze :: State a -> State a
 analyze (Cut :xs, ts, viewl -> t           :< q) =
         (xs, (Cut , t         ) : ts, q )
 analyze (Next:xs, ts, viewl -> Nil         :< q) =
         (xs, (Next, Nil       ) : ts, q )
 analyze (Next:xs, ts, viewl -> Single x t1 :< q) =
         (xs, (Next, Single x u) : ts, q |> t1 )
 analyze (Next:xs, ts, viewl -> Fork t1 t2  :< q) =
         (xs, (Next, Fork u u  ) : ts, (q |> t1) |> t2 )

 u = Nil -- or undefined

 bfs xs t = nodes $
     until (Seq.null . queue) analyze (xs, [], singleton t)
 
 So,  bfs  just applies  analyze  repeatedly on a suitable state which
 includes the queue, the list of nodes in breadth-first order and a
 stream of actions. (This is 

[Haskell-cafe] [OT?] Haskell-inspired functions for BASH

2010-03-31 Thread Patrick LeBoutillier
Hi all,

I've been studying Haskell for about a year now, and I've really come
to like it. In my daily work I write a lot of BASH shell scripts and I
thought I'd try add some of the haskell features and constructs to
BASH to make my scripting life a bit easier. So I've been working on a
small BASH function library that implements some basic functional
programming building blocks.

Note: There is no actual Haskell code involved here.

I put up the full manpage here:
http://hpaste.org/fastcgi/hpaste.fcgi/view?id=24564
Source is here: http://svn.solucorp.qc.ca/repos/solucorp/bashkell/trunk/trunk/

All this is very prototypical, but here is an example of some of the
stuff I've got so far (map, filter, foldr):

$ ls data
1.txt  2.txt

# basic map, argument goes on the command line
$ ls -d data/* | map basename
1.txt
2.txt

# map with lambda expression
$ ls -d data/* | map '\f -> basename $f .txt'
1
2

# simple filter, also works with lambda
$ ls -d data/* | map basename | filter 'test 1.txt ='
1.txt

# sum
$ ls -d data/* | map '\f -> basename $f .txt' | foldr '\x acc -> echo
$(($x + $acc))' 0
3
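For comparison, the shell pipeline above corresponds roughly to this Haskell (file names hard-coded for illustration; takeBaseName/takeFileName come from the filepath package):

```haskell
import System.FilePath (takeBaseName, takeFileName)

files :: [FilePath]
files = ["data/1.txt", "data/2.txt"]

names, stems, ones :: [String]
names = map takeFileName files     -- like: ls -d data/* | map basename
stems = map takeBaseName files     -- like: map '\f -> basename $f .txt'
ones  = filter (== "1.txt") names  -- like: filter 'test 1.txt ='

total :: Int
total = foldr (\x acc -> read x + acc) 0 stems  -- like the foldr sum: 3
```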

Basically I'm looking for a bit of feedback/info:
- Does anyone know if there are already similar projets out there?
- Does anyone find this interesting?
- Any other comment/suggestion/feedback
- Where's a good place to promote such a project?


Thanks a lot,

Patrick LeBoutillier


--
=
Patrick LeBoutillier
Rosemère, Québec, Canada
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [OT?] Haskell-inspired functions for BASH

2010-03-31 Thread Ivan Miljenovic
On 1 April 2010 11:05, Patrick LeBoutillier
patrick.leboutill...@gmail.com wrote:
 Basically I'm looking for a bit of feedback/info:
 - Does anyone know if there are already similar projets out there?

On Hackage: LambdaShell, language-sh, HSH, Hashell (dead), only, Shellac

Note that not all of these might be directly similar: I found them by
doing a quick search for shell and looking in the console section.

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Haskell on Debian

2010-03-31 Thread Alex Rozenshteyn
I tend to install haskell packages from apt whenever possible.  One such
package is unix, which appears to come provided by the ghc6 debian
package.  I'm trying to cabal install lambdabot and getting the following:

$ cabal install lambdabot
Resolving dependencies...
Configuring lambdabot-4.2.2.1...
Preprocessing executables for lambdabot-4.2.2.1...
Building lambdabot-4.2.2.1...

Main.hs:11:7:
Could not find module `System.Posix.Signals':
  It is a member of the hidden package `unix-2.4.0.0'.
  Perhaps you need to add `unix' to the build-depends in your .cabal
file.
  Use -v to see a list of the files searched for.
cabal: Error: some packages failed to install:
lambdabot-4.2.2.1 failed during the building phase. The exception was:
ExitFailure 1

I have a feeling this has something to do with the interaction between cabal
and apt...

Does anyone have any advice for fixing this issue other than just adding
unix to the lambdabot.cabal file?


-- 
 Alex R
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell on Debian

2010-03-31 Thread Ivan Miljenovic
On 1 April 2010 11:42, Alex Rozenshteyn rpglove...@gmail.com wrote:
 Main.hs:11:7:
     Could not find module `System.Posix.Signals':
       It is a member of the hidden package `unix-2.4.0.0'.
       Perhaps you need to add `unix' to the build-depends in your .cabal
 file.

Interesting, because unix _is_ listed in build-depends in the .cabal
file for lambdabot.

Does "ghc-pkg check" complain about unix?  Does "ghc-pkg list unix"
say it's there?

 I have a feeling this has something to do with the interaction between cabal
 and apt...

Highly unlikely unless there's an inconsistency in the packages on your system.

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Haskell on Debian

2010-03-31 Thread Alex Rozenshteyn
$ ghc-pkg check

outputs nothing

$ ghc-pkg list unix
/var/lib/ghc-6.12.1/package.conf.d
   unix-2.4.0.0
/home/alex/.ghc/x86_64-linux-6.12.1/package.conf.d

unix appears to be in the build-depends of the Library section, but not in
the build-depends of the Executable lambdabot section.
Adding unix to the second build-depends appears to fix the error (but now
it complains that base is hidden).

On Wed, Mar 31, 2010 at 8:48 PM, Ivan Miljenovic
ivan.miljeno...@gmail.comwrote:

 On 1 April 2010 11:42, Alex Rozenshteyn rpglove...@gmail.com wrote:
  Main.hs:11:7:
  Could not find module `System.Posix.Signals':
It is a member of the hidden package `unix-2.4.0.0'.
Perhaps you need to add `unix' to the build-depends in your .cabal
  file.

 Interesting, because unix _is_ listed in build-depends in the .cabal
 file for lambdabot.

 Does ghc-pkg check complain about unix?  Does ghc-pkg list unix
 say it's there?

  I have a feeling this has something to do with the interaction between
 cabal
  and apt...

 Highly unlikely unless there's an inconsistency in the packages on your
 system.

 --
 Ivan Lazar Miljenovic
 ivan.miljeno...@gmail.com
 IvanMiljenovic.wordpress.com




-- 
 Alex R
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] why doesn't time allow diff of localtimes ?

2010-03-31 Thread briand
On Wed, 31 Mar 2010 01:32:56 -0400
wagne...@seas.upenn.edu wrote:

 Two values of LocalTime may well be computed with respect to
 different timezones, which makes the operation you ask for dangerous.
 First convert to UTCTime (with localTimeToUTC), then compare.

that makes sense.  unfortunately getting the current timezone to
convert to UTC results in the dreaded IO contamination problem...
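To make the suggested route concrete, here is a sketch; the zones (UTC+2 and UTC) and the timestamps are made up for illustration, and in a real program you'd obtain the zone in IO via getCurrentTimeZone:

```haskell
import Data.Time

-- Two wall-clock readings taken in different (known) zones.
t1, t2 :: LocalTime
t1 = LocalTime (fromGregorian 2010 3 31) (TimeOfDay 12 0 0)   -- in UTC+2
t2 = LocalTime (fromGregorian 2010 3 31) (TimeOfDay 11 30 0)  -- in UTC

-- Convert each to UTCTime with its own zone, then diff safely.
diffLocal :: NominalDiffTime
diffLocal = diffUTCTime (localTimeToUTC (hoursToTimeZone 2) t1)
                        (localTimeToUTC utc t2)
-- 12:00 in UTC+2 is 10:00 UTC, so t1 is 90 minutes before t2.
```

The conversions themselves are pure; only discovering the current zone (getCurrentTimeZone) lives in IO.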

Brian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Trying to figure out a segfault caused by haskeline.

2010-03-31 Thread ryan winkelmaier
Hi,

Nobody seems to have any idea what is happening yet. Though thanks for
trying, dagit (I forgot to add haskell-cafe to my replies to him).

Quick update in case it helps: compiling with profiling and running with the
-xc option results in,

<Main.CAF:runInputT_rOA><System.Posix.Files.CAF>
Segmentation fault

I'm still working on it, but could it be the configuration file? That's the
thing haskeline accesses files for, right?


On Mon, Mar 29, 2010 at 8:28 PM, ryan winkelmaier syfra...@gmail.comwrote:

 Hey everyone,

 I'm looking for help with a seg fault that takes out both my ghci and darcs
 as well as anything that uses haskeline. A bug on the haskeline trac hasn't
 gotten any response so I figured I might as well figure this out myself and
 get ghci up and running again.

 Using the test program below I get the same segmentation fault, so I run it
 using gdb and get the following,

 Program received signal SIGSEGV, Segmentation fault.
 0x0053fdce in base_ForeignziCziString_zdwa_info ()

 My knowledge of this is very limited from here on out so here is what I was
 able to get together.

 On the 20th call of base_ForeignziCziString_zdwa_info
 r14 is 0 so

 0x0053fdce +22:movsbq (%r14),%rax

 produces the segfault.
 From what I understand this is happening in the Foreign.C.String module,
 but that's as much as I know.
 Anyone have advice on where to go next?

 System info:
 Distribution: gentoo amd64
 Ghc version: currently 6.12.1 (though the segfault happends on any of the
 ones with haskeline)
 Haskeline version: 0.6.2.2

 Here is the test program


 
 module Main where

 import System.Console.Haskeline
 import System.Environment

 {--
 Testing the line-input functions and their interaction with ctrl-c signals.

 Usage:
 ./Test         (line input)
 ./Test chars   (character input)
 --}

 mySettings :: Settings IO
 mySettings = defaultSettings {historyFile = Just "myhist"}

 main :: IO ()
 main = do
     args <- getArgs
     let inputFunc = case args of
             ["chars"] -> fmap (fmap (\c -> [c])) . getInputChar
             _         -> getInputLine
     runInputT mySettings $ withInterrupt $ loop inputFunc 0
   where
     loop inputFunc n = do
         minput <- handleInterrupt (return (Just "Caught interrupted"))
                       $ inputFunc (show n ++ ":")
         case minput of
             Nothing     -> return ()
             Just "quit" -> return ()
             Just "q"    -> return ()
             Just s      -> do
                 outputStrLn ("line " ++ show n ++ ":" ++ s)
                 loop inputFunc (n+1)

 


 Syfran

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: why doesn't time allow diff of localtimes ?

2010-03-31 Thread Maciej Piechotka
On Wed, 2010-03-31 at 19:29 -0700, bri...@aracnet.com wrote:
 On Wed, 31 Mar 2010 01:32:56 -0400
 wagne...@seas.upenn.edu wrote:
 
  Two values of LocalTime may well be computed with respect to
  different timezones, which makes the operation you ask for dangerous.
  First convert to UTCTime (with localTimeToUTC), then compare.
 
 that makes sense.  unfortunately getting the current timezone to
 convert to UTC results in the dreaded IO contamination problem...
 
 Brian

Hmm. Where do you get the local times from in the first place?

Regards

PS. As the timezone can change at runtime (e.g. switching from summer time
to standard time, or simply crossing a border with a laptop), I don't think
there is any escape from IO.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: why doesn't time allow diff of localtimes ?

2010-03-31 Thread briand
On Thu, 01 Apr 2010 06:20:25 +0200
Maciej Piechotka uzytkown...@gmail.com wrote:

 On Wed, 2010-03-31 at 19:29 -0700, bri...@aracnet.com wrote:
  On Wed, 31 Mar 2010 01:32:56 -0400
  wagne...@seas.upenn.edu wrote:
  
   Two values of LocalTime may well be computed with respect to
   different timezones, which makes the operation you ask for
   dangerous. First convert to UTCTime (with localTimeToUTC), then
   compare.
  
  that makes sense.  unfortunately getting the current timezone to
  convert to UTC results in the dreaded IO contamination problem...
  
  Brian
 
 Hmm. Where do you get the local times from in the first place?
 

read it from a file of course :-)

I think I've got it figured out.  It's not too ugly.

One interesting hole in the system is that buildTime can return a
LocalTime _or_ a UTCTime.  That means the same string used to
generate a time can give you two different times.

It seems as though it should be restricted to always returning a
UTCTime.  If it's going to return a local time, it should require an
extra argument of a timezone, shouldn't it?
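The thread is about buildTime/parseTime from time-1.x; with the later parseTimeM API the same ambiguity can be demonstrated like this (format string and timestamp invented for illustration):

```haskell
import Data.Time

str, fmt :: String
str = "2010-03-31 12:00:00"
fmt = "%Y-%m-%d %H:%M:%S"

-- Same string, two different interpretations, selected only by the
-- requested result type: no timezone argument is involved.
asLocal :: Maybe LocalTime
asLocal = parseTimeM True defaultTimeLocale fmt str

asUTC :: Maybe UTCTime
asUTC = parseTimeM True defaultTimeLocale fmt str
```

Nothing in the string says which zone it was recorded in, yet one reading is a wall-clock time and the other claims to be UTC.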

Brian
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ANN: logfloat 0.12.1

2010-03-31 Thread wren ng thornton


-- logfloat 0.12.1


This package provides a type for storing numbers in the log-domain, 
primarily useful for preventing underflow when multiplying many 
probabilities as in HMMs and other probabilistic models. The package 
also provides modules for dealing with floating-point numbers correctly.



-- Changes since 0.12.0.1


* Fixed a number of bugs where LogFloat values would become NaN when 
they should not. These bugs involved using normal-space positive 
infinity and so would not affect clients using the package for 
probabilities.


The fixes do introduce some extra checks though. If anyone is using the 
package for probabilities in a large enough project and could run some 
benchmarks to see how 0.12.1 compares to 0.12.0.1 I'd love to hear the 
results. (I'm not sure that doing microbenchmarks would actually give 
much insight about real programs.) If the overhead is bad enough I can 
add a LogProb type which ensures things are proper probabilities in 
order to avoid the new checks.





-- Compatibility / Portability


The package is compatible with Hugs (September 2006) and GHC (6.8, 6.10, 
6.12). For anyone still using GHC 6.6, the code may still work if you 
replace the LANGUAGE pragmas with the equivalent OPTIONS_GHC pragmas.


The package is not compatible with nhc98 and Yhc because 
Data.Number.RealToFrac uses MPTCs. The other modules should be compatible.




-- Links


Homepage:
http://code.haskell.org/~wren/

Hackage:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/logfloat

Darcs:
http://code.haskell.org/~wren/logfloat/

Haddock (Darcs version):
http://code.haskell.org/~wren/logfloat/dist/doc/html/logfloat/

--
Live well,
~wren
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe