[Haskell-cafe] Wouter-style expressions

2008-06-04 Thread Ron Alford
Here's something that should be an easy extension of Wouter's approach
to extensible datatypes, but I'm failing (possibly since it's 2:20am).
I have several classes of expressions I'm trying to represent (thus,
Wouter's approach), and my first operation to implement over them is
printing.

Attached is a simplified version of what I'm doing.  Expressions are
composed of conjunctives (and) and atoms.  Atoms are composed of a
predicate string and a list of fillers (taken from 'Const' and 'Var').
 For example (in a lisp like syntax):
(and (pred c ?var1) (pred ?var1 ?var2))
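
For reference, the datatypes look roughly like this (a sketch only; the
attached file is the real thing):

{-# OPTIONS_GHC -fglasgow-exts #-}

data Expr f = In (f (Expr f))             -- tie the recursive knot
data (f :+: g) e = Inl (f e) | Inr (g e)  -- coproduct of signatures

newtype Var      e = Var String           -- ?var1, ?var2
newtype Const    e = Const String         -- c
data    Atomic t e = Atomic String [t]    -- (pred c ?var1)
newtype And      e = And [e]              -- (and e1 e2 ...)

-- 'Printable' has a Functor superclass, so each signature needs one:
instance Functor Var        where fmap _ (Var v)      = Var v
instance Functor Const      where fmap _ (Const c)    = Const c
instance Functor (Atomic t) where fmap _ (Atomic p a) = Atomic p a
instance Functor And        where fmap f (And es)     = And (map f es)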

To do this, I defined a 'Printable' class:
class Functor f => Printable f where
    exprDoc :: f t -> Doc

I think the 't' here will get me into trouble later.

Combinations of printable types are also printable:
instance (Printable f, Printable g) => Printable (f :+: g) where
    exprDoc (Inr x) = exprDoc x
    exprDoc (Inl y) = exprDoc y


Constants, variables, and atoms are defined to be printable:
instance Printable Var where
    exprDoc (Var name) = text ('?':name)

instance Printable Const where
    exprDoc (Const name) = text name

instance Printable f => Printable (Atomic (Expr f)) where
    exprDoc (Atomic p tl) = parens $ hsep $
        (text p) : (map (\ (In t) -> exprDoc t) tl)

But the obvious definition for conjunction doesn't work:
instance Printable And where
    exprDoc (And el) = sep (map exprDoc el)

GHC barfs, throwing:
Couldn't match expected type `f t' against inferred type `t1'
  `t1' is a rigid type variable bound by
   the type signature for `exprDoc' at WouterTest.hs:62:17
  Expected type: [f t]
  Inferred type: [t1]
In the second argument of `map', namely `el'
In the first argument of `sep', namely `(map exprDoc el)'


I've attached the code.  Compile and inspect with:
$ ghci -fglasgow-exts -fallow-overlapping-instances WouterTest.hs

test1 works great (other than needing some redundant typing, any hints?).

test2 needs the definition of 'Printable And' to print, but I haven't
gotten that to work yet.  It also needs the redundant typing.


-Ron

However, when I tried making a more complex expression (conjunction, in
this case), I couldn't get the types to align.  In particular, the
problem appears when I uncomment the obvious definition of 'Printable And' above.


WouterTest.hs
Description: Binary data


Re: [Haskell-cafe] More on performance

2008-06-04 Thread Henning Thielemann

On Tue, 3 Jun 2008, Don Stewart wrote:

 I wrote up the second part of the tour of understanding low level
 performance in GHC here,

 http://reddit.com/r/programming/info/6lx36/comments/

 Follows on from the discussion last week about various performance
 related things.

Now the difficult question: How to write the 'mean' function in terms of
'sum' and 'length' while getting the same performance?


Re: [Haskell-cafe] Laziness leaks

2008-06-04 Thread Bernie Pope


On 04/06/2008, at 10:12 AM, Ronald Guida wrote:

I would ask, "How do I examine the evaluation order of my code?", but
the answer is already available: use a debugger.  Haskell already has
debugging tools that do exactly what I need.
(http://www.haskell.org/haskellwiki/Debugging)

In particular, HOOD looks extremely interesting.


I would recommend the GHCi debugger for looking at the evaluation  
order of code.

Single stepping can be very illuminating.
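
Roughly like this (the breakpoint location and names below are just
placeholders, to show the shape of a session):

  ghci> :load Foo.hs
  ghci> :break 12
  ghci> main
  ghci> :step
  ghci> :print xs
  ghci> :continue

:print is particularly handy because it shows which parts of a value
are still unevaluated thunks, without forcing them.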

Bernie.


Re: [Haskell-cafe] Wouter-style expressions

2008-06-04 Thread Wouter Swierstra

Hi Ron,

I've attached a revised file that solves your problem.

The solution is actually quite subtle. If you define the class as:

class Functor f => Printable f where
 exprDoc :: f t -> Doc

you can't make recursive calls to sub-expressions. There is, after  
all, no reason to believe that t is Printable. The following choice  
of class is better:


class Functor f => Printable f where
  exprDoc :: Printable g => f (Expr g) -> Doc

Now when you define the instance for And, you can safely make  
recursive calls to the list of subchildren.
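
For instance, something along these lines (assuming And is declared as
data And e = And [e]):

instance Printable And where
  exprDoc (And es) = sep (map (\(In e) -> exprDoc e) es)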


To define the function you want, just add a wrapper to exprDoc:

ppExpr :: Printable f => Expr f -> Doc
ppExpr (In t) = exprDoc t

All the best,

Wouter

PS - You may want to add:

infixr 6 :+:

to your code. It'll save you a lot of parentheses!





WouterTest.hs
Description: Binary data




Re: [Haskell-cafe] A simple beginner question

2008-06-04 Thread Yitzchak Gale
Adam Smyczek wrote:
 data SampleType = A | B Int | C String | D --  etc.
 sampleTypes = [A, B 5, C "test"] :: [SampleType]
 How do I find for example element A in the sampleTypes list?

There have been many useful replies. But since Adam
originally announced that this is a beginner question, I think
some perspective is in order.

In Haskell, there is often no need at all for boolean-valued
functions to deconstruct a data structure. Usually pattern
matching does the job beautifully, and you structure your program
to exploit that. In the case that you do need them, though, the
previous responses are excellent suggestions.

In my experience, the ways that I get data out of an ADT,
from most common to most rare, are:

1. Just use pattern matching
2. Use record syntax to get selector functions
3. Define an Eq instance
4. Define per-constructor modifiers when the ADT is used as
  state in a state monad (this one is admittedly a pain in the neck)

I can't remember the last time I needed to write a function
like isA - it almost never comes up.
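
(For completeness, such a function is just a couple of lines of pattern
matching over Adam's type, and the Prelude does the rest:

isA :: SampleType -> Bool
isA A = True
isA _ = False

-- e.g.  any isA sampleTypes,  or  filter isA sampleTypes

but as I said, it rarely comes up.)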

My opinion, YMMV.

Regards,
Yitz


[Haskell-cafe] Re: [ANNOUNCE] git-darcs-import 0.1

2008-06-04 Thread Peter Hercek

Aaron Denney wrote:

This is drifting off-topic, but...
On 2008-06-03, Peter Hercek [EMAIL PROTECTED] wrote:

Aaron Denney wrote:

--- cut ---

Darcs patches are pretty much an implicit rebase.

You cannot push patch B if it depends on patch A without also
  pushing A. And darcs currently does not allow you to reorder
  B before A


True.  This is a *feature* not a bug.  You shouldn't be able to do this
automatically, because it can't be done right.  You need to do this sort
of thing manually.  If you don't, the heuristics used will bite you at
some point.  When they do commute, there is no problem.


Sorry, I did not intend to indicate it should be done without doing the
 reordering first (by providing manual conflict resolution).


Git rebase works quite well even in cloned repositories.


Meh.  It can, if you're really really lucky.


Actually, you are probably right; I have only needed a (non-complicated)
 workaround about twice, so I might have just been lucky. I did like,
 though, that git told me what was wrong, in contrast to mercurial queues,
 which just replicated both the original branch and the rebased branch (so
 I ended up with two copies on both sides at the end :-( ).

--- cut ---


Rebasing is doable in git as a one-repository operation because each
repository has multiple branches.  As darcs has one repo per branch,
it fundamentally needs to be done in multiple repos.

There are naturally two repos, upstream, and your-feature-development.

your-feature-development has a patch A that you want to rebase.

What you should do is pull upstream into new-tracking, then pull patch A
from your-feature-development into new-tracking.

If it applies with no problem, great: mv your-feature-development
your-feature-development-old; mv new-tracking your-feature-development.
Of course, in this case, you could have just pulled into
your-feature-development.  If there weren't any other patches to save in
the old your-feature-development, you can delete it instead of moving
it.

When there is a conflict, then you need to handle it somehow.  Neither
git nor darcs can do it automatically.  You can just record the merge
conflict and your resolution.  This keeps repos that pulled from you
valid, but this won't give you the clean history that you presumably
want.  So you need to combine the merger and cleanup into a new patch
with the same log message, etc.  It's true that git does make *this*
process very nice.


Ok, in such a simple case darcs can preserve the message too if the
 repository is not cloned (and you indicated that it does not really
 work with cloned repositories in git - I'm not an experienced git user).
 Just pull to the original repository and use amend-record to resolve
 the conflict and the message will be preserved. So I would say that
 for *this* *simple* case darcs is better.

But what about this git rebasing option? How to do it more easily
 (than the solution I know, which I describe below) in darcs?

using "git-rebase --onto master next topic" to get from:

 o---o---o---o---o  master
      \
       o---o---o---o---o  next
                        \
                         o---o---o  topic

to:

 o---o---o---o---o  master
     |            \
     |             o'--o'--o'  topic
      \
       o---o---o---o---o  next

This is the reason why I mentioned that reordering dependent patches AB
 to BA (with manual conflict resolution) would be needed in darcs
 to support (what I believe is a better) alternative to git rebase.
I do not know how to do this in darcs (without manually adding the
 topic changes with the gnu patch utility in a new darcs repository
 clone which does not have the topic changes (and the next changes as
 well) pulled in, and throwing away the old one at the end).


There is one thing that git rebase does easily (and correctly) that darcs
doesn't do nicely: rewriting history by merging commits "prior" to the
head.  I put prior in quotes, because darcs doesn't preserve history
in the first place.  I don't find that a compelling use, as opposed to
maintaining topic branches.


I do not know what you mean here. Can you point me to some example?

I hope that this is not too off-topic for haskell cafe ... and so far
 I believe this is not a flame war :-) I just like that Bertram's code
 exists and I think it (as well as git) should not be dismissed, since
 AFAIK there is more to git than performance, just as there is more
 to darcs than it not imposing a patch order on us (which is the darcs
 feature I like).

Peter.



Re: [Haskell-cafe] Laziness leaks

2008-06-04 Thread Jules Bean

Ronald Guida wrote:

[snip]


By default, a lazy language will procrastinate.  By default, a strict
language will anticrastinate.  Either way, I can waste resources by
blindly accepting the default time management plan.


Nice analysis.

Would you like to put that (the whole thing, not just that last para) on 
the wiki?


Jules


RE: [Haskell-cafe] problems with derive/TH

2008-06-04 Thread Simon Peyton-Jones
| World.hs:42:0:
|No instances for (Eq (a (M.Map String Player)),
|  Eq (a (M.Map ItemId Item)),
|  Eq (a (M.Map PlayerId Player)),
|  Eq (a (M.Map RoomId Room)),
|  Eq (a RoomId))
|  arising from the 'deriving' clause of a data type declaration
|   at World.hs:(42,0)-(50,14)

Yes, automatic 'deriving' in GHC became a little more restrictive in GHC 6.8 (I 
think), for very good reasons.

I think that's what your problem is.  The darcs repo 
http://cdsmith.twu.net/demos/mud seems to be offline today, so I can't look, 
but I think that's highly likely.  The solution is invariably to add a manual 
instance declaration, as you have done in your subsequent message.
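
A self-contained toy example of the same situation, and of the
standalone-deriving workaround (this is of course not your actual type,
just a cut-down analogue):

{-# LANGUAGE StandaloneDeriving, FlexibleContexts, UndecidableInstances #-}
import qualified Data.Map as M

-- The derived Eq context would have to mention applications of the
-- type variable 'a', which plain 'deriving' now refuses to invent:
data World a = World (a (M.Map String Int)) (a Int)

deriving instance (Eq (a (M.Map String Int)), Eq (a Int)) => Eq (World a)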

Simon


Re: [Haskell-cafe] HDBC-odbc 1.1.4.4 fixes Windows linking problems

2008-06-04 Thread Greg Matheson
On Wed, 04 Jun 2008, Andrew Appleyard wrote:

 Prelude Database.HDBC Database.HDBC.ODBC> handleSqlError $ conn <-
 connectODBC "dictation"
 <interactive>:1:0: Parse error in pattern

 Try:

   conn <- handleSqlError $ connectODBC "dictation"

C:\cygwin\home\Administrator> dictation
dictation: user error (SQL error: SqlError {seState = [\IM002\,\01S00\], s
eNativeError = -1, seErrorMsg = connectODBC/sqlDriverConnect: [\0: [Microsoft]
[ODBC \\197X\\176\\202\\181{\\166\\161\\186\\222\\178z\\173\\251] \\167\\228\\16
4\\163\\168\\236\\184\\234\\174\\198\\168\\211\\183\\189\\166W\\186\\217\\165B\\
165\\188\\171\\252\\169w\\185w\\179]\\170\\186\\197X\\176\\202\\181{\\166\\161\
,\0: [Microsoft][ODBC \\197X\\176\\202\\181{\\166\\161\\186\\222\\178z\\173\\25
1] \\181L\\174\\196\\170\\186\\179s\\189u\\166r\\166\\234\\196\\221\\169\\202\]
})

ghc can't handle Chinese Big5, it appears, but HDBC does appear
to be talking ODBC.

--
Dr Bean  If there is one thing we know after
 studying experts, it is that
 experts know more than we know.


Re: [Haskell-cafe] More on performance

2008-06-04 Thread Duncan Coutts

On Wed, 2008-06-04 at 09:32 +0200, Henning Thielemann wrote:
 On Tue, 3 Jun 2008, Don Stewart wrote:
 
  I wrote up the second part of the tour of understanding low level
  performance in GHC here,
 
  http://reddit.com/r/programming/info/6lx36/comments/
 
  Follows on from the discussion last week about various performance
  related things.
 
 Now the difficult question: How to write the 'mean' function in terms of
 'sum' and 'length' while getting the same performance?

There's another, rather harder, fusion transformation: when two left
folds are demanded in the same strictness context and they fold over
the same input list, they can be performed together.

sum    = foldl (\s x -> s + x) 0
length = foldl (\l x -> l + 1) 0

mean xs = sum xs / length xs

So we must note that sum and length are demanded at the same time and,
since they are both foldl's, will each consume the whole of xs.

So we can merge the two foldl's into one just by tupling them up:

sumlength = foldl (\(s, l) x -> (s + x, l + 1)) (0, 0)

mean xs = s / l
  where (s, l) = sumlength xs
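
(In practice you also want the accumulator components kept strict,
otherwise the pair just fills up with thunks; something like

import Data.List (foldl')

mean :: [Double] -> Double
mean xs = s / fromIntegral l
  where
    (s, l) = foldl' step (0, 0) xs
    step :: (Double, Int) -> Double -> (Double, Int)
    step (s', l') x = let s'' = s' + x
                          l'' = l' + 1
                      in  s'' `seq` l'' `seq` (s'', l'')

but that is exactly the kind of plumbing we'd like the transformation
to produce for us.)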

The Fortran people have been doing this kind of loop fusion for some
years. What makes it a bit harder for us is that we cannot do it with
rules because it's not a simple local transformation. It could probably
be done with a special compiler pass, though it'd need strictness
analysis to be done much earlier.

Duncan



Re: [Haskell-cafe] More on performance

2008-06-04 Thread Loup Vaillant
[Forgot to post to the list, sorry]

2008/6/4 Duncan Coutts [EMAIL PROTECTED]:

 On Wed, 2008-06-04 at 09:32 +0200, Henning Thielemann wrote:
 On Tue, 3 Jun 2008, Don Stewart wrote:

  I wrote up the second part of the tour of understanding low level
  performance in GHC here,
 
  http://reddit.com/r/programming/info/6lx36/comments/
 
  Follows on from the discussion last week about various performance
  related things.

 Now the difficult question: How to write the 'mean' function in terms of
 'sum' and 'length' while getting the same performance?

 There's another rather harder fusion transformation that notices when
 two left folds are demanded in the same strictness context and they fold
 over the same input list then they can be performed together.

 sum    = foldl (\s x -> s + x) 0
 length = foldl (\l x -> l + 1) 0

 mean xs = sum xs / length xs

 So we must note that sum and length are demanded at the same time and
 since they are both foldl's will consume the whole of xs.

 So we can merge the two foldl's into one just by tupling them up:

 sumlength = foldl (\(s, l) x -> (s + x, l + 1)) (0, 0)

 mean xs = s / l
  where (s, l) = sumlength xs

I see a problem with this particular fusion, though: it changes the
space complexity of the program, from linear to constant. Therefore,
with some programs, relying on this optimization can be a matter of
correctness, not just performance. So, if it is not clear when
the compiler will optimize this, I'd rather not use it. (A
counterexample is tail calls, which are rather easily deducible, at
least in a strict context.)

At least, with simpler fusions, the difference was between
stressing the GC or not. The space and time complexities of the
problem didn't change at all. Only the constants did.

Loup


Re: [Haskell-cafe] More on performance

2008-06-04 Thread Ketil Malde

Henning Thielemann [EMAIL PROTECTED] writes:

 Now the difficult question: How to write the 'mean' function in terms of
 'sum' and 'length' while getting the same performance?

Write a RULE pragma converting

 \xs -> (foldl' f y0 xs, foldl' g z0 xs) 

into

 \xs -> foldl' (\(y,z) x -> (f y x, g z x)) (y0,z0) xs

?

To actually work, it'd have to work for an arbitrary top-level function,
not just the (,) constructor.  No idea if it's feasible at all, of
course :-)

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants


Re: [Haskell-cafe] More on performance

2008-06-04 Thread Henning Thielemann


On Wed, 4 Jun 2008, Duncan Coutts wrote:


On Wed, 2008-06-04 at 09:32 +0200, Henning Thielemann wrote:


Now the difficult question: How to write the 'mean' function in terms of
'sum' and 'length' while getting the same performance?


There's another rather harder fusion transformation that notices when
two left folds are demanded in the same strictness context and they fold
over the same input list then they can be performed together.

sum    = foldl (\s x -> s + x) 0
length = foldl (\l x -> l + 1) 0

mean xs = sum xs / length xs

So we must note that sum and length are demanded at the same time and
since they are both foldl's will consume the whole of xs.

So we can merge the two foldl's into one just by tupling them up:

sumlength = foldl (\(s, l) x -> (s + x, l + 1)) (0, 0)

mean xs = s / l
 where (s, l) = sumlength xs



How about assisting the compiler with a helper function named 'parallel' ?

parallel :: ([a] -> b, [a] -> c) -> [a] -> (b,c)
parallel (f,g) xs = (f xs, g xs)

mean xs =
  uncurry (/) $ parallel (sum,length) xs


? We could state RULES in terms of 'parallel'. By calling 'parallel', the 
user indicates that he wants fusion here.


Say
  "parallel/foldl/foldl" forall f, g, x0, y0.
  parallel (foldl f x0, foldl g y0) = foldl (\(x,y) z -> (f x z, g y z)) 
(x0,y0)
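
In concrete GHC syntax that would be something like the following (we'd
probably also want a NOINLINE pragma on 'parallel' so the rule has a
chance to match; whether it fires often enough on real code is another
question):

{-# NOINLINE parallel #-}
{-# RULES
  "parallel/foldl/foldl" forall f g x0 y0.
    parallel (foldl f x0, foldl g y0)
      = foldl (\(x, y) z -> (f x z, g y z)) (x0, y0)
  #-}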


[Haskell-cafe] Re: [ANNOUNCE] git-darcs-import 0.1

2008-06-04 Thread apfelmus

Peter Hercek wrote:

But what about this git rebasing option? How to do it more easily
 (than the solution I know and I described it later) in darcs?

using "git-rebase --onto master next topic" to get from:

 o---o---o---o---o  master
      \
       o---o---o---o---o  next
                        \
                         o---o---o  topic

to:

 o---o---o---o---o  master
     |            \
     |             o'--o'--o'  topic
      \
       o---o---o---o---o  next

This is the reason why I mentioned reordering depending patches AB
 to BA (with manual conflict resolution) would be needed in darcs
 to support (I believe a better) alternative to git rebase.


I don't understand (probably because I haven't used either dvcs).

Either the changes in the  next-topic  path don't depend on the changes 
in the  fork-next  path. Then, the patches commute and it's no problem 
for darcs.


Or the  next-topic  path relies on features from  next  that are not 
present in  master . But then, you're screwed anyway and should merge 
some parts from  next  into  master  so as to advance the point where 
master  and  next  fork.


 o---o---o---o---o  master
      \
       x---x---o---o---o  next
            \
             o---o---o  topic

(Of course, you don't actually advance the fork but rather add patches 
at the end of  master . Hm, set-of-patches semantics seems to be a lot 
nicer here anyway. To me, the whole point of rebasing seems to be to 
somehow bring set semantics into the tree semantics.)



Regards,
apfelmus



[Haskell-cafe] Re: Laziness leaks

2008-06-04 Thread apfelmus

Ronald Guida wrote:

So I just thought of something.  If laziness leads to laziness leaks,
then is there such a thing as a strictness leak?  I realized that the
answer is yes.

A lazy leak is a situation where I'm wasting resources to delay a
sequence of calculations instead of just doing them now.  But in a
strict language, I might waste resources to compute things that I'll
never need.  I would call that a strictness leak.

Now I could ask the dual question, "How do I detect strictness leaks?",
and I would probably get the same answers: profiling, looking at
object code, and being explicit about the evaluation strategy.

Both types of leaks share a lot in common.  In both cases, I'm wasting
resources.  If I have a real-time system, then either type of leak can
cause me to a miss a deadline.


I haven't heard the terms "laziness leak" and "strictness leak" before; 
imho they sound a bit spooky because it's not clear to me what the 
situation without a leak would be. (Time vs. space? Is an O(n) algorithm a 
strictness leak compared to an O(log n) algorithm?)



Note that lazy evaluation never wastes time; evaluating a term with lazy 
evaluation will always take no more reduction steps than doing so eagerly 
or partly eagerly. But it can waste space (-> space leak), for 
instance by accumulating a big expression like


  (..) -> ((..)+1) -> (((..) + 1) + 1) -> etc.

instead of evaluating  x+1  immediately

   5 -> 6 -> 7 -> etc.

However, this would be wasted time in case the whole expression will not 
be evaluated but just thrown away. So, it's a trade-off.
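
The canonical concrete example of such an accumulation is summing with
a lazy left fold:

  import Data.List (foldl')

  -- builds the whole chain (((0+1)+2)+...) before evaluating it
  sumLazy   = foldl  (+) 0 [1..1000000 :: Integer]

  -- forces the accumulator at every step, runs in constant space
  sumStrict = foldl' (+) 0 [1..1000000 :: Integer]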


The effect you have in mind only appears in real-time systems, where 
lazy evaluation procrastinates everything by default. So, trying to 
implement a real-time system in a lazy language is more or less a 
paradox :) as Okasaki already points out in his book.



Eager evaluation may waste both time and space compared to an alternative 
course of reduction.



Regards,
apfelmus

PS: The reduction strategies we compare to don't evaluate under lambdas.



Re: [Haskell-cafe] More on performance

2008-06-04 Thread Luke Palmer
On Wed, Jun 4, 2008 at 9:48 AM, Loup Vaillant [EMAIL PROTECTED] wrote:
 I see a problem with this particular fusion, though: It changes the
 space complexity of the program, from linear to constant. Therefore,
 with some programs, relying on this optimization can be a matter of
 correctness, not just performance. Therefore, if it is not clear when
 the compiler will optimize this, I'd rather not use it. (A counter
 example is tail calls, which are rather easily deducible, at least in
 a strict context)

Haskell is not required to be lazy, only non-strict.  That is, Haskell
as a language is free to re-evaluate expressions bound in let clauses.
 This can change the time and space complexity of a program.

To me, time and space complexity is not about correctness but
performance.  Given unbounded time and space, you will still arrive at
the same result regardless of the complexity.  What makes the
asymptotic class more blessed than the associated constants?

However I still see your point.  If optimizations cannot be
guaranteed--the conditions under which they fire are brittle--then the
language can be yet harder to predict (which is not something Haskell
needs!).  It's hard to turn down an optimization which will
accidentally asymptotically improve your program, however.

I wonder what can be said about stable optimizations which are
insensitive to their environments in some sense.

Luke


[Haskell-cafe] Re: Ubuntu and ghc

2008-06-04 Thread Simon Marlow

Claus Reinke wrote:

- i don't want to have to remove anything explicitly, because that 
would mean bypassing the haskell installation managers

- i would want to see a single haskell installation manager
   for each system,


I think that's fundamentally the wrong approach.  We shouldn't have to 
build a Haskell installation manager.  Would you also want installation 
managers for Perl, Python, Ruby, C, C++, etc. each with their own different 
user interfaces and feature sets?  I think not - you want a single package 
manager for the whole system from which you can install/uninstall libraries 
for any language.


This is something that Windows gets completely wrong.  Why do I have twelve 
icons in my status bar all representing little programs that are running in 
the background checking for updates to their own bits of software?  Why on 
earth do I have a Printer Driver Update Manager?  And I'd be wondering the 
same thing about a Haskell installation manager: installation and 
dependencies are not something specific to Haskell.


Cheers,
Simon


[Haskell-cafe] Re: [ANNOUNCE] git-darcs-import 0.1

2008-06-04 Thread Aaron Denney
On 2008-06-04, Peter Hercek [EMAIL PROTECTED] wrote:
 But what about this git rebasing option? How to do it more easily
   (than the solution I know and I described it later) in darcs?

 using "git-rebase --onto master next topic" to get from:

  o---o---o---o---o  master
       \
        o---o---o---o---o  next
                         \
                          o---o---o  topic

 to:

  o---o---o---o---o  master
      |            \
      |             o'--o'--o'  topic
       \
        o---o---o---o---o  next

apfelmus answered this.  I might expand on his reply.

 There is one thing that git rebase does easily (and correctly) that darcs
 doesn't do nicely: rewriting history by merging commits prior to the
 head.  I put prior in quotes, because darcs doesn't preserve history
 in the first place.  I don't find that a compelling use, as opposed to
 maintaing topic branches.

 I do not know what you mean here. Can you point me to some example?

Letting capitals be commits, and lowercase be trees at the point of
these commits.

Suppose your history is:

A --- B --- C --- D
|     |     |     |
a     b     c     d

And that B somehow doesn't make sense except with the additional changes
in C.  You don't want to deal with this, or have anyone see B.  All it
does is clutter up the history.  So you want to expunge it from the
history.

git rebase can rewrite this to

A --- C' --- D'
|     |      |
a     c      d

Doing this in darcs would require unrecording B and C, and then
rerecording C'.  But, if D is in the repo, then it is likely that B and
C can't be commuted past it to be unrecorded.  (If they can, no
problem!)

Unrecording D (and possibly E, F, G, etc.) lets you do this, but if you
then pull it back from another repo, it will depend on B and C, and pull
these in, which are now doppelgangers of C'.  Not having used darcs 2,
I'm not sure if that's still quite so fatal, but it remains bad news
AIUI.

The bottom line is that darcs is a tool for managing sets of
always-existing patches, and ordering them lazily, as needed.  In particular,
no history generally exists, unless each patch depends on exactly one
previous.  It has a differential view of software development, in that
the changes, and not the sum at each point matter (though of course, the
current sum does matter.)

On the other hand, git is a tool for managing (and munging) histories
of development in many weird and wacky ways.  It has an integral
view of software development, in that the changes are lazily derived from the
saved state at each point, and are strictly ordered even when they're
independent.  It can, when needed, work with these changes to accomplish
fairly interesting history-altering tasks, but as soon as they're used
to construct a new history, they're discarded.  (Yes, git uses deltas,
but this is merely an optimization.)

The two models are dual to each other in many ways.

-- 
Aaron Denney
--



[Haskell-cafe] Re: [ANNOUNCE] git-darcs-import 0.1

2008-06-04 Thread Aaron Denney
On 2008-06-04, apfelmus [EMAIL PROTECTED] wrote:
 Peter Hercek wrote:
 But what about this git rebasing option? How to do it more easily
  (than the solution I know and I described it later) in darcs?
 
 using "git-rebase --onto master next topic" to get from:

  o---o---o---o---o  master
       \
        o---o---o---o---o  next
                         \
                          o---o---o  topic

 to:

  o---o---o---o---o  master
      |            \
      |             o'--o'--o'  topic
       \
        o---o---o---o---o  next
 
 This is the reason why I mentioned reordering depending patches AB
  to BA (with manual conflict resolution) would be needed in darcs
  to support (I believe a better) alternative to git rebase.

 I don't understand (probably because I haven't use either dvcs).

 Either the changes in the  next-topic  path don't depend on the changes 
 in the  fork-next  path. Then, the patches commute and it's no problem 
 for darcs.

Right.  Then 

  o---o---o---o---o  master
       \
        o---o---o---o---o  next
                         \
                          o---o---o  topic

is not a good model for what darcs has.  What it has is more like

 o---o---o---o---o  master
 |\
 | o---o---o---o---o  next
 \ |
  o---o---o+  topic

The patches in topic that are in next are independent of the ones that
aren't in next, so it's another (virtual) line-of-development, that
darcs can lazily construct as needed.  These lines-of-development are
similar to branches of git that have been merged, but you also have
access to the unmerged versions until a patch comes in that depends on
the merger.

If I commit three new features that don't interact, a darcs repo will
essentially look like:

            --- topicA ---
           /               \
history --+--- topicB ------+---
           \               /
            --- topicC ---

Where the merger is virtual.  Darcs will implicitly linearize this to
any of 
  
history --- topicA --- topicB --- topicC ---
history --- topicA --- topicC --- topicB ---
history --- topicB --- topicA --- topicC ---
history --- topicB --- topicC --- topicA ---
history --- topicC --- topicA --- topicB ---
history --- topicC --- topicB --- topicA ---

/as needed/. git constructs one of these, based on how you did the
commits, and gives you ways to alter it to the others.

 Or the  next-topic  path relies on features from  next  that are not 
 present in  master . But then, you're screwed anyway

Yep.

 and should merge some parts from next into master so as to advance the
 point where master and next fork.

That's one solution.  Of course, darcs doesn't have semantic dependency,
but syntactic dependency.  (You can add extra dependencies to
model semantic dependencies, but you can't take away the syntactic
dependencies.)  Another solution, if there are syntactic
but not semantic dependencies, is to manually use patch and diff to get
90% there, and then clean up and record.

-- 
Aaron Denney
--



RE: [Haskell-cafe] More on performance

2008-06-04 Thread Sittampalam, Ganesh
 I wonder what can be said about stable optimizations which are
insensitive to their environments in some sense.

http://citeseer.ist.psu.edu/veldhuizen02guaranteed.html

Ganesh




Re: [Haskell-cafe] More on performance

2008-06-04 Thread Jon Harrop
On Wednesday 04 June 2008 11:05:52 Luke Palmer wrote:
 To me, time and space complexity is not about correctness but
 performance.

IRL the specification often dictates the complexity. If your code fails to 
satisfy the spec then it is wrong. Are you saying that Haskell code can never 
satisfy any such specification?

 Given unbounded time and space, you will still arrive at the same result
 regardless of the complexity. 

Given that the set of computers with unbounded time and space is empty, is it 
not fruitless to discuss its properties?

-- 
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
http://www.ffconsultancy.com/products/?e


Re: [Haskell-cafe] More on performance

2008-06-04 Thread Henning Thielemann


On Wed, 4 Jun 2008, Luke Palmer wrote:


On Wed, Jun 4, 2008 at 9:48 AM, Loup Vaillant [EMAIL PROTECTED] wrote:

I see a problem with this particular fusion, though: It changes the
space complexity of the program, from linear to constant. Therefore,
with some programs, relying on this optimization can be a matter of
correctness, not just performance. Therefore, if it is not clear when
the compiler will optimize this, I'd rather not use it. (A counter
example is tail calls, which are rather easily deducible, at least in
a strict context)


Haskell is not required to be lazy, only non-strict.  That is, Haskell
as a language is free to re-evaluate expressions bound in let clauses.
This can change the time and space complexity of a program.

To me, time and space complexity is not about correctness but
performance.  Given unbounded time and space, you will still arrive at
the same result regardless of the complexity.  What makes the
asymptotic class more blessed than the associated constants?


Is it possible to extend the garbage collector in such a way that it does not 
only check whether references to a piece of data exist, but also 
tries to eliminate these references by further evaluation? Consider again


mean xs = sum xs / fromIntegral (length xs)

If 'sum' forces construction of xs, unevaluated 'length' still points to 
the first element of xs. Could the garbage collector start counting for 
'length' in order to free the first elements of xs?



[Haskell-cafe] Re: Ubuntu and ghc

2008-06-04 Thread Achim Schneider
Simon Marlow [EMAIL PROTECTED] wrote:

 Claus Reinke wrote:
 
  - i don't want to have to remove anything explicitly, because
  that would mean bypassing the haskell installation managers
  - i would want to see a single haskell installation manager
 for each system,
 
 I think that's fundamentally the wrong approach.  We shouldn't have
 to build a Haskell installation manager.  Would you also want
 installation managers for Perl, Python, Ruby, C, C++, etc. each with
 their own different user interfaces and feature sets?  I think not -
 you want a single package manager for the whole system from which you
 can install/uninstall libraries for any language.
 
 This is something that Windows gets completely wrong.  Why do I have
 twelve icons in my status bar all representing little programs that
 are running in the background checking for updates to their own bits
 of software?  Why on earth do I have a Printer Driver Update
 Manager?  And I'd be wondering the same thing about a Haskell
 installation manager: installation and dependencies are not
 something specific to Haskell.
 
Well, then there are developers who don't want to do .ebuilds, .rpms
for 20 distributions, .debs for 20 distributions, .cabs... Meaning that
if you have a project with 5 developers using 3 1/2 distributions, you
will have a hard time installing.

Haskell code tends to be platform-unspecific; one shouldn't have to
write platform-specific installation code just to make users happy.

You have a point, though, and I wouldn't mind at all cabal-install
being integrated into portage, that is, making portage
_understand_ .cabal files and introducing another field in them that
specifies non-haskell (e.g. gtk) dependencies.

That is: I'd like to see a cabal-install for every system, using native
package management where possible. Aren't there any usable
third-party package managers for windoze? Maybe we can hook properly
into cygwin, though it's surely not meant to support non-binary
packages.

Maybe gentoo should start to do binary releases, too, superseding
debian and any other distribution.

On another approach vector, I wouldn't want to update my Eve client
using portage. It has to check whether there's a patch available before
it connects to the server, anyway, so it can as well update.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 



Re: [Haskell-cafe] Re: Ubuntu and ghc

2008-06-04 Thread Ketil Malde
Achim Schneider [EMAIL PROTECTED] writes:

Caveat: I have only a vague grasp on what exactly is being criticized
here - using a modern Linux distribution, tons of packages are
available, and almost all issues Claus points out seem to be taken care
of - at least as far as I can see.

 Well, then there are developers who don't want to do .ebuilds, .rpms
 for 20 distributions, .debs for 20 distributions, .cabs... Meaning that
 if you have a project with 5 developers using 3 1/2 distributions, you
 will have a hard time installing.

I think you should either require your developers to use the system
that is provided to them, or be able and willing to maintain their own
system.  Most large Linux distributions seem to come with lots of
Haskell-related stuff nowadays - 139 packages on my Ubuntu install
(divide by something close to 3, as most library stuff comes in -dev,
-doc and -prof variants).

 You have a point, though, and I wouldn't mind at all cabal-install
 being integrated into portage,

I'm not too familiar with portage, but I think a better solution is to
provide tools to automatically generate packages for the various
systems.  How would you specify dependencies on non-haskell components
in a portable way?

 Aren't there any usable third-party package managers for windoze?

The most usable one I've seen is Steam from Valve, IIRC.  It'd be cool
if Haskell packages were provided this way.

 Maybe gentoo should start to do binary releases, too, superseding
 debian and any other distribution.

Yeah, that'll happen. :-)

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants


Re: [Haskell-cafe] Re: Ubuntu and ghc

2008-06-04 Thread Duncan Coutts

On Wed, 2008-06-04 at 14:54 +0200, Ketil Malde wrote:

  You have a point, though, and I wouldn't mind at all cabal-install
  being integrated into portage,
 
 I'm not too familiar with portage, but I think a better solution is to
 provide tools to automatically generate packages for the various
 systems. 

Yeah, we've got one for gentoo/portage called hackport. That's why we
have so many packages in the haskell overlay. We didn't do them all
manually :-)

 How would you specify dependencies on non-haskell components in a
 portable way?

Unfortunately the same packages get named differently on different
package systems, eg zlib, libzlib, libzlib-dev etc. It's probably
possible to use reasonably standard names and map to the distro-names.
Doing that reliably is not going to be easy though.

Duncan






RE: [Haskell-cafe] Re: Ubuntu and ghc

2008-06-04 Thread Re, Joseph (IT)
Not sure about its current state, but a friend was working on this
until he graduated recently: http://www.acm.uiuc.edu/projects/Wipt

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Ketil Malde

 Aren't there any usable third-party package managers for windoze?

The most usable one I've seen is Steam from Valve, IIRC.  It'd be cool
if Haskell packages were provided this way.

-k




Re: [Haskell-cafe] Re: Ubuntu and ghc

2008-06-04 Thread Duncan Coutts

On Wed, 2008-06-04 at 11:33 +0100, Simon Marlow wrote:
 Claus Reinke wrote:
 
   - i don't want to have to remove anything explicitly, because that 
   would mean bypassing the haskell installation managers
  - i would want to see a single haskell installation manager
 for each system,
 
 I think that's fundamentally the wrong approach.  We shouldn't have to 
 build a Haskell installation manager.  Would you also want installation 
 managers for Perl, Python, Ruby, C, C++, etc. each with their own different 
 user interfaces and feature sets?  I think not - you want a single package 
 manager for the whole system from which you can install/uninstall libraries 
 for any language.

As I see it we need both. We need to make it easy to translate cabal
packages into distro packages. We do have tools to do that at the moment
for Gentoo, Debian and Fedora. I'm sure they could be improved.

However we cannot expect all distros (esp Windows) to have all packages
that are on hackage at all times. That's where it makes sense to have a
tool like cabal-install as a secondary package manager. There's also the
fact that most distro package managers do not handle unprivileged
per-user installations very well.

A further issue is that the dependencies that Haskell packages have are
pretty complex and more so than can actually be expressed in some distro
package systems. In particular Gentoo portage. Binary distros are better
off in that respect since the deps of binary haskell packages are
considerably simpler.

Duncan



[Haskell-cafe] Re: Ubuntu and ghc

2008-06-04 Thread Achim Schneider
Ketil Malde [EMAIL PROTECTED] wrote:

 Achim Schneider [EMAIL PROTECTED] writes:
 
 Caveat: I have only a vague grasp on what exactly is being criticized
 here - using a modern Linux distribution, tons of packages are
 available, and almost all issues Claus point out seem to be taken care
 of - at least as far as I can see.
 
  Well, then there are developers who don't want to do .ebuilds, .rpms
  for 20 distributions, .debs for 20 distributions, .cabs... Meaning
  that if you have a project with 5 developers using 3 1/2
  distributions, you will have a hard time installing.
 
 I think you should either require your developers to use the system
 that is provided to them, or be able and willing to maintain their own
 system.  Most large Linux distributions seem to come with lots of
 Haskell-related stuff nowadays - 139 packages on my Ubuntu install
 (divide by something close to 3, as most library stuff comes in -dev,
 -doc and -prof variants).
 
Well, you have a point but still don't have one. Many of gentoo's
haskell .ebuilds are seriously outdated, eg. wxhaskell still depends on
ghc 6.4. See "Damnit, we need a CPAN".

The haskell overlay features about 240 packages from alex to yi,
hackage currently lists 596 packages.

There are always things that a distribution doesn't include, especially
sparsely used special purpose software. Compiling a LADSPA plugin by
hand isn't that much of an issue, but you'll get into problems as soon
as you want your programs to find it without touching paths that only
your system's package manager should touch. I'm proud to say that my
current gentoo installation is still the first one, surviving several
world updates and at least 4 years of hacking around, using a lot of
unstable and masked packages.


  You have a point, though, and I wouldn't mind at all cabal-install
  being integrated into portage,
 
 I'm not too familiar with portage, but I think a better solution is to
 provide tools to automatically generate packages for the various
 systems.  How would you specify dependencies on non-haskell components
 in a portable way?
 
By using portage ;)

Seriously: Gentoo isn't a distribution, but a meta-distribution. It
wouldn't make much sense to support the generation of alien binary
packages, though, as dependency names will surely differ, and if you
have to generate the whole distribution, you can equally well just use
portage to install it.

Regarding non-haskell dependencies: It's already a problem from
distribution to distribution. In portage, you would have to depend on
e.g. either gtk+ or emul-linux-x86-gtklibs (if you want to build 32bit
software...), in debian on gtk-dev. Each distribution would have to make
a list on how cabal's non-haskell dependencies map to their own package
names, which is seriously less work than figuring these out by hand.


-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 



[Haskell-cafe] Installing Haskore under Windows

2008-06-04 Thread David
Hi!

I'm currently running ghc under Windows XP and want to play around
with Haskore, but I'm a little confused about how to install it, or if
it's even possible to use it with ghc. The 'readme' file contains this
note:

Note that the file ghc_add/IOExtensions.hs is a partial replacement of
a library file of the same name under Hugs.  It does not work yet on
Windows/GHC (where one has to distinguish between binary and text IO).

Does this mean I have to use Hugs rather than ghc? I have both Hugs
and ghc installed, but I'd prefer to use ghc if possible.

In either case, I find the install instructions (for Haskore, that is)
a little confusing, probably because I'm also new to Haskell. Any help
would be greatly appreciated.

Thanks.


[Haskell-cafe] Re: [ANNOUNCE] git-darcs-import 0.1

2008-06-04 Thread Peter Hercek

Aaron Denney wrote:

On 2008-06-04, apfelmus [EMAIL PROTECTED] wrote:

-- cut --
Or the  next-topic  path relies on features from  next  that are not 
present in  master . But then, you're screwed anyway


Yep.


Well, not really; it depends what kind of dependency it is. This kind of rebase
 is useful when topic depends only syntactically (as you pointed out later)
 on next, or when the semantic dependency is only on a small part of next.
 Git rebase allows you to get the syntax, or the small part of the semantics,
 into the rebased topic by asking you for (manual) conflict resolution. This
 would correspond to commuting darcs patches which depend on each other
 (again possible by providing manual conflict resolution).
Of course this happens only when it was anticipated that the upstream merge
 of next would happen before topic, but then the upstream maintainers
 decided that topic should go upstream first. So, not often.



and should merge some parts from next into master so as to advance the
point where master and next fork.


That's one solution.  Of course, darcs doesn't have semantic dependency,
but syntactic dependency.  (You can add extra dependencies to
model semantic dependencies, but you can't take away the syntactic
dependencies.)  Another solution, if there's syntactic,
but not semantic dependencies, is to manually use patch and diff to get
90% there, and then cleanup and record.


OK, so I think this is what I expected for such a case.

Thanks for the explanation of the meaning of merging patches prior to the head.

Peter.



[Haskell-cafe] Re: library for drawing charts

2008-06-04 Thread Neal Alexander

Tim Docker wrote:

Peter wrote:

Has anyone got some code for drawing charts? I don't mean
graphs of functions, ala

http://dockerz.net/twd/HaskellCharts
...
I would like something that can generate PNGs in memory, i.e. not
directly to a file.


The library at the above URL supports a range of backends through the
(nice to use) cairo graphics API. In-memory images are supported. Also,
it's by no means just graphs of functions - that just happens to be
several of the demos.


I'd like 2D pie charts, bar charts and something like a google-o-meter.


At the moment it does line charts and a few variants of these. I'll add
pie and bar charts when I need them - patches in the meantime gratefully
accepted.

Tim


I was using the HaskellCharts library and needed the same two things; so 
I rolled a quick and dirty pie chart generator (Barchart is on the TODO 
list).



http://72.167.145.184:8000/Screenshot.png
http://72.167.145.184:8000/PieChart.hs



x <- widgetGetDrawWindow canvas
y <- widgetGetSize canvas

renderWithDrawable x (f y)

where f = PieChart.graph title 0x548B54 [(50.0, "A"), (50.0, "B")]



Re: [Haskell-cafe] Installing Haskore under Windows

2008-06-04 Thread Henning Thielemann

On Wed, 4 Jun 2008, David wrote:

 I'm currently running ghc under Windows XP and want to play around
 with Haskore, but I'm a little confused about how to install it, or if
 it's even possible to use it with ghc. The 'readme' file contains this
 note:

You can also use Haskore with GHC. For an extended version using Cabal
see:
   http://darcs.haskell.org/haskore/
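
Something along these lines should do (assuming you have darcs installed;
the Setup file may be called Setup.lhs or Setup.hs depending on what you
get):

  $ darcs get http://darcs.haskell.org/haskore/
  $ cd haskore
  $ runhaskell Setup.lhs configure
  $ runhaskell Setup.lhs build
  $ runhaskell Setup.lhs install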


Re: [Haskell-cafe] Re: Ubuntu and ghc

2008-06-04 Thread Ross Paterson
On Wed, Jun 04, 2008 at 02:22:07PM +0100, Duncan Coutts wrote:
 As I see it we need both. We need to make it easy to translate cabal
 packages into distro packages. We do have tools to do that at the moment
 for Gentoo, Debian and Fedora. I'm sure they could be improved.
 
 However we cannot expect all distros (esp Windows) to have all packages
 that are on hackage at all times. That's where it makes sense to have a
 tool like cabal-install as a secondary package manager. There's also the
 fact that most distro package managers do not handle unprivileged
 per-user installations very well.

cabal-install probably needs extra features to work in this way.
If you ask it to install a package and its prerequisites, each of
those packages will either be
(1) already installed,
(2) absent but available from the native package manager,
(3) available from hackage.
In the second case you'd probably want to break out of cabal-install to
install the pre-packaged ones before building the new ones.


cabal and platform-independent haskell installation management (again) (Re: [Haskell-cafe] Re: Ubuntu and ghc)

2008-06-04 Thread Claus Reinke
I think that's fundamentally the wrong approach.  We shouldn't have to 
build a Haskell installation manager.  Would you also want installation 
managers for Perl, Python, Ruby, C, C++, etc. each with their own different 
user interfaces and feature sets?  I think not - you want a single package 
manager for the whole system from which you can install/uninstall libraries 
for any language.


This is something that Windows gets completely wrong.  Why do I have twelve 
icons in my status bar all representing little programs that are running in 
the background checking for updates to their own bits of software?  Why on 
earth do I have a Printer Driver Update Manager?  And I'd be wondering the 
same thing about a Haskell installation manager: installation and 
dependencies are not something specific to Haskell.


why then do we have ghc-pkg, or cabal? surely the native package
managers should handle all that?

there are (at least) two dimensions of uniformity: across different
kinds of software on a single kind of system, and with a single kind 
of software across different kinds of system. platform-specific
package managers hide the software-specific notions of package
dependency maintenance, haskell-specific package managers
hide the platform-specific notions of package dependency 
maintenance. 

there is no need for platform- and haskell-specific tools to be 
entirely separate or in conflict with each other: where both exist, 
one can be a view on the other (if you are on linux-of-the-day, 
you can use its package manager, independent of whether your
packages are haskell or lisp; and if you are using haskell, you
should be able to use its package manager, independent of
whether you are on unix-variant-of-today or on 
unix-variant-of-yesterday).


there seems to be a lot of confusion here, some of us not 
understanding the issues because we happen to be using
systems where everything "just works", others among us
not understanding the issues because we happen to be
using systems where such things would never work anyway,
and yet others insisting on "i'll do it my way, so i know what
works" (and then, of course, there are those who are
actively working on improving the situation who will see
my criticism as constructive, i hope!-).

1. there are no systems where packages "just work"!
   there are systems where a few people ensure that
   many people can live in such an illusion, though.

2. systems with native package manager software still
   need help from haskell-specific toolchains (unless
   you want the human package managers on those
   systems to code all haskell-specific dependencies
   by hand).

3. systems without native package managers (or perhaps
   i should say: systems on which users with unix background
   traditionally avoid getting acquainted with the details and 
   usage of whatever might pass as installation management 
   on those systems) are still in *very* wide-spread use, 
   and if haskell users on those systems are left out in the
   rain, haskell developers will not be able to support those
   systems. this limits the user and application base of haskell
   on those systems, making haskell less relevant than it could be.

4. haskell enables programming at a very high level of 
   abstraction, with fairly good support for mostly platform
   independent code. but that code needs to be installed, 
   and integrated with dependencies and dependents, and 
    the integrated haskell installations need to be maintained.

   and that should just work, even if the developer is on
   (1;2) and the user is on (3), or vice versa, or if developers
   and users are on different flavours of (1;2) or (3).

with these clarifications out of the way, my interpretation 
of cabal was that it set out to provide two things


(A) a uniform platform-independent interface to such a 
   haskell package installation manager.

(B) a uniform platform-independent toolchain to support
   such a haskell package installation manager.

on systems in the (1;2) class, human package managers 
would use (B) to integrate haskell packages into the native
package management software, so users might never even
encounter cabal. even so, (A) might offer a haskell-specific
view on the general platform package management (when 
you want to see the haskell gui libs rather than all gui libs).


on systems in the (3) class, users and developers would
interface with (A/B) directly, for lack of a better alternative.

and developers/users in the (4) class would simply use
(A/B), without having to check whether they are real
or just a view on the platform-specific software. it is this
cross platform view of software development that i'm
most interested in (i'm one of those who use bash, vim,
opera, and haskell, no matter whether i'm on windows,
solaris, or whatever, and the cross-platform availability
of those tools has saved me many a headache;-).

returning to my earlier message, it seems that my 
concerns were mainly these:


- 

Re: [Haskell-cafe] Re: Ubuntu and ghc

2008-06-04 Thread Duncan Coutts

On Wed, 2008-06-04 at 15:25 +0200, Achim Schneider wrote:

 Well, you have a point but still don't have one. Many of gentoo's
 haskell .ebuilds are seriously outdated, eg. wxhaskell still depends on
 ghc 6.4. See "Damnit, we need a CPAN"
 
 The haskell overlay features about 240 packages from alex to yi,
 hackage currently lists 596 packages.

You may be interested to know I just tried to install 564 packages from
hackage using cabal-install (others excluded due to missing or
inconsistent dependencies). Of those, 156 failed to install (direct
failure or because they depended on another package that failed). I now
have 157 programs installed in ~/.cabal/bin and ghc-pkg takes two and a
half seconds to tell me that I have 454 libraries installed (some were
installed previously via portage).

Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: cabal and platform-independent haskell installation management (again) (Re: [Haskell-cafe] Re: Ubuntu and ghc)

2008-06-04 Thread Duncan Coutts

On Wed, 2008-06-04 at 15:14 +0100, Claus Reinke wrote:
  I think that's fundamentally the wrong approach.  We shouldn't have to 
  build a Haskell installation manager.  Would you also want installation 
  managers for Perl, Python, Ruby, C, C++, etc. each with their own different 
  user interfaces and feature sets?  I think not - you want a single package 
  manager for the whole system from which you can install/uninstall libraries 
  for any language.
  
  This is something that Windows gets completely wrong.  Why do I have twelve 
  icons in my status bar all representing little programs that are running in 
  the background checking for updates to their own bits of software?  Why on 
  earth do I have a Printer Driver Update Manager?  And I'd be wondering the 
  same thing about a Haskell installation manager: installation and 
  dependencies are not something specific to Haskell.
 
 why then do we have ghc-pkg, or cabal? surely the native package
 managers should handle all that?
 
 there are (at least) two dimensions of uniformity: across different
 kind of software on a single kind of system, and with a single kind 
 of software across different kinds of system. platform-specific
 package managers hide the software-specific notions of package
 dependency maintenance, haskell-specific package managers
 hide the platform-specific notions of package dependency 
 maintenance. 
 
 there is no need for platform- and haskell-specific tools to be 
 entirely separate or in conflict with each other: where both exist, 
 one can be a view on the other (if you are on linux-of-the-day, 
 you can use its package manager, independent of whether your
 packages are haskell or lisp; and if you are using haskell, you
 should be able to use its package manager, independent of
 whether you are on unix-variant-of-today or on 
 unix-variant-of-yesterday).
 
 there seems to be a lot of confusion here, some of us not 
 understanding the issues because we happen to be using
 systems where everything just works, others among us
 not understanding the issues because we happen to be
 using systems where such things would never work anyway,
 and yet others insisting on i'll do it my way, so i know what
 works (and then, of course, there are those who are
 actively working on improving the situation who will see
 my criticism as constructive, i hope!-).
 
 1. there are no systems where packages just work!
 there are systems where a few people ensure that
 many people can live in such an illusion, though.

Yes indeed! :-)

 2. systems with native package manager software still
 need help from haskell-specific toolchains (unless
 you want the human package managers on those
 systems to code all haskell-specific dependencies
 by hand).

Yes. As an illustration: gentoo has a haskell-cabal.eclass that
interfaces between ebuilds and cabal as the build manager and there is a
tool to generate ebuilds that use the haskell-cabal.eclass from .cabal
descriptions (so we get correct deps automatically).

 3. systems without native package managers (or perhaps
 i should say: systems on which users with unix background
 traditionally avoid getting acquainted with the details and 
 usage of whatever might pass as installation management 
 on those systems) are still in *very* wide-spread use, 
 and if haskell users on those systems are left out in the
 rain, haskell developers will not be able to support those
 systems. this limits the user and application base of haskell
 on those systems, making haskell less relevant than it could be.

Eg Windows, OSX, Solaris.

 4. haskell enables programming at a very high level of 
 abstraction, with fairly good support for mostly platform
 independent code. but that code needs to be installed, 
 and integrated with dependencies and dependents, and 
 the integrated haskell installations needs to be maintained.
 and that should just work, even if the developer is on
 (1;2) and the user is on (3), or vice versa, or if developers
 and users are on different flavours of (1;2) or (3).
 
 with these clarifications out of the way, my interpretation 
 of cabal was that it set out to provide two things
 
 (A) a uniform platform-independent interface to such a 
 haskell package installation manager.
 (B) a uniform platform-independent toolchain to support
 such a haskell package installation manager.

I guess so.

 on systems in the (1;2) class, human package managers 
 would use (B) to integrate haskell packages into the native
 package management software, so users might never even
 encounter cabal.

As in my example with the gentoo haskell packages above.

  even so, (A) might offer a haskell-specific
 view on the general platform package management (when 
 you want to see the haskell gui libs rather than all gui libs).
 
 on systems in the (3) class, users and developers would
 interface with (A/B) directly, for lack 

Re: cabal and platform-independent haskell installation management (again) (Re: [Haskell-cafe] Re: Ubuntu and ghc)

2008-06-04 Thread Darrin Thompson
On Wed, Jun 4, 2008 at 10:14 AM, Claus Reinke [EMAIL PROTECTED] wrote:
 - it isn't sufficient to worry about installation management,
   one has to worry about integration, lifetime and uninstall
   management as well. in short, maintain the dependency
   graphs over any of install/upgrade/uninstall.


It's sufficient to worry about the problem at hand. Everything you've
mentioned can be done incrementally once the depsolver is happy.

After that it's quite likely that the set of people who care about
lifetimes and integration stuff will intersect with the people who
want to work on said stuff.

--
Darrin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: cabal and platform-independent haskell installation management (again) (Re: Re: Ubuntu and ghc)

2008-06-04 Thread Achim Schneider
Duncan Coutts [EMAIL PROTECTED] wrote:

 
 On Wed, 2008-06-04 at 15:14 +0100, Claus Reinke wrote:
   I think that's fundamentally the wrong approach.  We shouldn't
   have to build a Haskell installation manager.  Would you also
   want installation managers for Perl, Python, Ruby, C, C++, etc.
   each with their own different user interfaces and feature sets?
   I think not - you want a single package manager for the whole
   system from which you can install/uninstall libraries for any
   language.
   
   This is something that Windows gets completely wrong.  Why do I
   have twelve icons in my status bar all representing little
   programs that are running in the background checking for updates
   to their own bits of software?  Why on earth do I have a Printer
   Driver Update Manager?  And I'd be wondering the same thing about
   a Haskell installation manager: installation and dependencies
   are not something specific to Haskell.
  
  why then do we have ghc-pkg, or cabal? surely the native package
  managers should handle all that?
  
  there are (at least) two dimensions of uniformity: across different
  kind of software on a single kind of system, and with a single kind 
  of software across different kinds of system. platform-specific
  package managers hide the software-specific notions of package
  dependency maintainence, haskell-specific package managers
  hide the platform-specific notions of package dependency 
  maintainence. 
  
  there is no need for platform- and haskell-specific tools to be 
  entirely separate or in conflict with each other: where both exist, 
  one can be a view on the other (if you are on linux-of-the-day, 
  you can use its package manage, independent of whether your
  packages are haskell or lisp; and if you are using haskell, you
  should be able to use its package manager, independent of
  whether you are on unix-variant-of-today or on 
  unix-variant-of-yesterday).
  
  there seems to be a lot of confusion here, some of us not 
  understanding the issues because we happen to be using
  systems where everything just works, others among us
  not understanding the issues because we happen to be
  using systems where such things would never work anyway,
  and yet others insisting on i'll do it my way, so i know what
  works (and then, of course, there are those who are
  actively working on improving the situation who will see
  my criticism as constructive, i hope!-).
  
  1. there are no systems where packages just work!
  there are systems where a few people ensure that
  many people can live in such an illusion, though.
 
 Yes indeed! :-)
 
  2. systems with native package manager software still
  need help from haskell-specific toolchains (unless
  you want the human package managers on those
  systems to code all haskell-specific dependencies
  by hand).
 
 Yes. As an illustration: gentoo has an haskell-cabal.eclass that
 interfaces between ebuilds and cabal as the build manager and there
 is a tool to generate ebuilds that use the haskell-cabal.eclass
 from .cabal descriptions (so we get correct deps automatically).
 
  3. systems without native package managers (or perhaps
  i should say: systems on which users with unix background
  traditionally avoid getting acquainted with the details and 
  usage of whatever might pass as installation management 
  on those systems) are still in *very* wide-spread use, 
  and if haskell users on those systems are left out in the
  rain, haskell developers will not be able to support those
  systems. this limits the user and application base of haskell
  on those systems, making haskell less relevant than it could be.
 
 Eg Windows, OSX, Solaris.
 
  4. haskell enables programming at a very high level of 
  abstraction, with fairly good support for mostly platform
  independent code. but that code needs to be installed, 
  and integrated with dependencies and dependents, and 
  the integrated haskell installations needs to be maintained.
  and that should just work, even if the developer is on
  (1;2) and the user is on (3), or vice versa, or if developers
  and users are on different flavours of (1;2) or (3).
  
  with these clarifications out of the way, my interpretation 
  of cabal was that it set out to provide two things
  
  (A) a uniform platform-independent interface to such a 
  haskell package installation manager.
  (B) a uniform platform-independent toolchain to support
  such a haskell package installation manager.
 
 I guess so.
 
  on systems in the (1;2) class, human package managers 
  would use (B) to integrate haskell packages into the native
  package management software, so users might never even
  encounter cabal.
 
 As in my example with the gentoo haskell packages above.
 
   even so, (A) might offer a haskell-specific
  view on the general platform package management (when 
  you want to see the 

Re: [Haskell-cafe] What is the maturity of Haskell Web Frameworks

2008-06-04 Thread John Goerzen
Duncan Coutts wrote:
 On Tue, 2008-06-03 at 10:23 -0700, Don Stewart wrote:
 A new version of happs was written on a Monday a couple of months ago,
 using fastcgi and takusen. We're running it at galois, and you can
 find the code on code.haskell.org/hpaste. So not quite what you wanted,
 but another data point.
 
 That's very interesting. I hope we will also see a version with the same
 feature set implemented with the latest HAppS. It would give an
 interesting comparison of the web frameworks to see the same app
 implemented in both.

I haven't looked at that particular version, but when I last looked at
web frameworks, I was somewhat disappointed.  HAppS seemed to have
little documentation on the current version anyhow, and especially
little coverage on what to do if your app revolved around serving data
from an existing SQL database or other data source rather than its own.

hvac sounds interesting but at that time at least it was not clear
whether it was stable or would continue to be maintained.

xhtml and HStringTemplate were overkill for what I wanted, so I wound up
just using the FastCGI and CGI toolkits themselves.  They are
surprisingly nice, and with a little bit of wrappers around them for
things like validating forms, have worked exceptionally well.
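
For a flavour of what such a thin wrapper can look like, here is a minimal
sketch against the cgi package's Network.CGI (the "name" field and the
required helper are made up for illustration):

import Network.CGI

-- Require a non-empty form input, or report which field is missing.
required :: String -> CGI (Either String String)
required field = do
  mv <- getInput field
  return $ case mv of
    Just v | not (null v) -> Right v
    _                     -> Left ("missing field: " ++ field)

cgiMain :: CGI CGIResult
cgiMain = do
  name <- required "name"
  output $ case name of
    Left err -> "<html><body>Form error: " ++ err ++ "</body></html>"
    Right n  -> "<html><body>Hello, " ++ n ++ "</body></html>"

main :: IO ()
main = runCGI (handleErrors cgiMain)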

My needs for that project were not complex, the layout was not very
important, and the presentation never changes (only the business logic).
 So I understand that my needs may have been opposite from what most
people face.

-- John
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] HDBC-odbc 1.1.4.4 fixes Windows linking problems

2008-06-04 Thread John Goerzen
Greg Matheson wrote:
 On Tue, 03 Jun 2008, John Goerzen wrote:
 
 Hi,
 
 HDBC-odbc version 1.1.4.4 has been uploaded to Hackage.  It fixes the
 problems some here have encountered regarding ODBC crashes or other
 similar odd behavior on Windows. 
 
 I'm getting a 'Parse error in pattern' error.
 
  C:\Documents and Settings\Administrator> ghci
  GHCi, version 6.8.2: http://www.haskell.org/ghc/  :? for help
  Loading package base ... linking ... done.
  Prelude> :m Database.HDBC.ODBC Database.HDBC
  Prelude Database.HDBC Database.HDBC.ODBC> conn <- connectODBC "dictation"

Please note that this is not a valid connection string.

The HDBC-odbc API docs are available at
http://software.complete.org/static/hdbc-odbc/doc//HDBC-odbc/Database-HDBC-ODBC.html
and give the example string of:

DSN=hdbctest1

as well as a link to the Microsoft document that describes the string,
which lives at:

http://msdn.microsoft.com/en-us/library/ms715433(VS.85).aspx

That URL also contains error message descriptions.  For your state
01S00, it lists:

  Invalid connection string attribute

  An invalid attribute keyword was specified in the connection string
  (InConnectionString), but the driver was able to connect to the data
  source anyway. (Function returns SQL_SUCCESS_WITH_INFO.)

Your fix may be as simple as prepending the string with "DSN=".
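
For illustration, a minimal sketch of the corrected call (assuming a DSN
named "dictation" is actually configured; getTables is only there to show
the connection is alive):

import Database.HDBC
import Database.HDBC.ODBC (connectODBC)

main :: IO ()
main = do
  -- "dictation" on its own is not a valid ODBC connection string;
  -- name the data source explicitly with the DSN= attribute.
  conn <- connectODBC "DSN=dictation"
  getTables conn >>= print
  disconnect conn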

-- John

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] [ANNOUNCE] git-darcs-import 0.1

2008-06-04 Thread John Goerzen
Loup Vaillant wrote:
 2008/6/3 Darrin Thompson [EMAIL PROTECTED]:
 On Sun, Jun 1, 2008 at 2:44 PM, Bertram Felgenhauer
 [EMAIL PROTECTED] wrote:
 Hi,

 I'm pleased to announce yet another tool for importing darcs repositories
 to git. Unlike darcs2git [1] and darcs-to-git [2], it's written in
 Haskell, on top of the darcs2 source code. The result is a much faster
 program - it can convert the complete ghc 6.9 branch (without libraries)
 in less than 15 minutes on my slightly dated machine (Athlon XP 2500+),
 which is quite fast [3]. Incremental updates work, too.

 What's the appeal of this? I personally love git, but I thought all
 the cool kids at this school used darcs and that was that.
 
 Disclaimer: I'm no expert, this is what I've heard. Anyone please
 confirm or deny the following?

I've never been a cool kid at school, but I switched from Darcs to Git
recently.  I have not regretted it.  Git has quite a few features Darcs
doesn't by now, and there is a little bit (but not much) in the other
direction.  That and the lack of the idempotent merge bug.

Git's interface has really cleaned up in the last year, and it seems to
be well on the way to becoming the defacto DVCS of choice.  Maybe next
week, when it's picked up the last of the superdelegates, we can say for
sure, but of course bzr won't concede anything at this point

(OK, so we've had mind-numbing election coverage here in the US for too
long)

I've blogged about this.  http://changelog.complete.org/plugin/tag/git
will get you most of the relevant posts.

-- John
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] ICFP2008 Call for Poster proposals

2008-06-04 Thread Matthew Fluet (ICFP Publicity Chair)
ICFP 2008 poster session
September 21, 2008
Call for presentation proposals

ICFP 2008 will feature a poster session for researchers and
practitioners, including students.  The session will provide friendly
feedback for work that is in gestation or ongoing, as well as
opportunities to meet each other and exchange ideas.  We welcome poster
submissions on all ICFP topics, especially presentations of

  - applications of and to functional programming;
  - recent work presented at more distant venues; and
  - ongoing work, whether or not submitted to ICFP.

There will be no formal proceedings, but presenters will be invited to
submit working notes, demo code, and other materials to supplement their
abstract and poster.  These materials will be released informally on a
Web page dedicated to the poster session.  An accepted submission is not
intended to replace conference or journal publication.

Persons interested in presenting a poster are invited to submit a
one-page abstract in SIGPLAN conference style
http://www.acm.org/sigs/sigplan/authorInformation.htm
to the Web site
https://www.softconf.com/s08/icfp08-posters/submit.html
by June 30, 2008.  The program committee will review the submissions
for relevance and interest, and notify the authors by July 14, 2008.
Accepted posters must be presented by the authors in person on Sunday,
September 21, 2008.

Important dates:
Submission: Monday, June 30, 2008
Notification: Monday, July 14, 2008
Presentation: Sunday, September 21, 2008

Program committee:
Benjamin Pierce (University of Pennsylvania)
Colin Runciman (University of York)
Chung-chieh Shan (Rutgers University)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] [ANN] hs-pgms 0.1 -- Programmer's Minesweeper in Haskell

2008-06-04 Thread Bertram Felgenhauer
Hi,

I've just uploaded hs-pgms to hackage. It is a Haskell implementation
of Programmer's Minesweeper [1], which allows programmers to implement
minesweeper strategies and run them. (Note: ghc >= 6.8 is required.)

hs-pgms uses MonadPrompt to achieve a clean separation between
strategies, game logic, and presentation. There are two frontends,
one command line frontend which is mainly useful for collecting
statistics, and a GUI frontend, using gtk2hs, for watching the
strategies in action.
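
For readers who haven't seen the prompt style before, a tiny self-contained
sketch of the separation idea (this is only the shape of the technique, not
the actual MonadPrompt or hs-pgms API):

-- A strategy is a resumable program: it either finishes with a result,
-- or asks the game to probe a cell and waits for the answer.
data Strategy r
  = Done r
  | Probe (Int, Int) (Bool -> Strategy r)   -- Bool: did we hit a mine?

-- One possible game logic: a single mine at (1,1).
runFixedBoard :: Strategy r -> r
runFixedBoard (Done x)    = x
runFixedBoard (Probe c k) = runFixedBoard (k (c == (1, 1)))

-- A toy strategy: probe two cells, report whether either was a mine.
strategy :: Strategy Bool
strategy =
  Probe (0, 0) $ \hit1 ->
  Probe (1, 1) $ \hit2 ->
  Done (hit1 || hit2)

main :: IO ()
main = print (runFixedBoard strategy)   -- prints True

A GUI or statistics frontend is then just another interpreter for the same
Strategy value.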

There's a git repo, see http://repo.or.cz/w/hs-pgms.git .

enjoy,

Bertram

[1] http://www.ccs.neu.edu/home/ramsdell/pgms/index.html
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] More on performance

2008-06-04 Thread Albert Y. C. Lai

Jon Harrop wrote:
IRL the specification often dictates the complexity. If your code fails to 
satisfy the spec then it is wrong. Are you saying that Haskell code can never 
satisfy any such specification?


In addition to RL, it should and it can in theory too:

http://www.cs.toronto.edu/~hehner/aPToP/  in particular chapter 4 
sections 4.2 and 4.3.


I also admire that Musser and Saini give asymptotic costs as an 
indispensable part of the STL specification. Everyone should at least do 
that much.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laziness leaks

2008-06-04 Thread Albert Y. C. Lai

apfelmus wrote:
I haven't heard the terms laziness leak and strictness leak before, 
imho they sound a bit spooky because it's not clear to me what the 
situation without leak would be. (Time vs Space? Is an O(n) algorithm a 
strictness leak compared to an O(log n) algorithm?)


Leak refers to a surprise. You didn't expect Firefox to use Omega(n) 
memory where n is the number of times you refresh a rather plain static 
HTML page, but it does, and you call it a memory leak. You didn't 
expect foldl to lump a big thunk, but it does, and you call it a lazy 
leak. Therefore leak refers to a program failing your expectation - 
even if you yourself wrote the program.
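
A concrete sketch of the foldl case (lazySum builds the whole chain of
suspended additions before anything is added; foldl' forces the accumulator
at every step and runs in constant space):

import Data.List (foldl')

-- Thunk chain: (((0+1)+2)+...+n) is built first, evaluated only at print.
lazySum :: Integer
lazySum = foldl (+) 0 [1 .. 100000]

-- Strict accumulator: each step does its addition immediately.
strictSum :: Integer
strictSum = foldl' (+) 0 [1 .. 100000]

main :: IO ()
main = print strictSum >> print lazySum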


(Leak, bug, issue... We surely are very creative in how to avoid 
calling a shovel a shovel, or an error an error.)


The solution is better education, better reasoning, and Intelligent 
Design. As you write every line of code, you should already know what to 
expect. No magic, no surprise, just science.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is the maturity of Haskell Web Frameworks

2008-06-04 Thread Sterling Clover


hvac sounds interesting but at that time at least it was not clear
whether it was stable or would continue to be maintained.

xhtml and HStringTemplate were overkill for what I wanted, so I  
wound up

just using the FastCGI and CGI toolkits themselves.  They are
surprisingly nice, and with a little bit of wrappers around them for
things like validating forms, have worked exceptionally well.



I'm in a position now where I can make a reasonable promise that hvac  
will not only be maintained but will also undergo continued development. I  
just, for example, hooked in some basic postgresql support yesterday  
(although the changes are not tested/pushed to the repo yet).  
(Speaking of which, I noticed that the HDBC postgresql bindings don't  
set seState on an error -- is this intentional, or something that  
should be fixed up?)


As such though, I also can't promise that hvac as it stands is  
officially stable, although most of the work that I imagine will be done on  
it will consist of extensions rather than API-breaking changes.


I've also been working on a lightweight testing API for programs  
using the CGI monad, integrated with quickcheck. Although this  
project isn't final yet either, there's a working repo at
http://code.haskell.org/~sclv/cgicheck/ if anyone wants to play with it/use  
it. I plan to use this quite a bit to test hvac, and any applications  
produced using it.


Additionally, the hvac code now has a nice, though also incomplete  
orm/dsl-type library for database access (also built on top of HDBC).  
When I feel more confident/polished in the code there, I plan to  
split it out and hackage it as well.


Of course, folks with a little time/inclination who play with these  
things a bit and put them through the paces are a big help in working  
out the kinks such that I feel they're closer to release-quality
(even if that release *is* only 0.1). :-)


Relatedly, I'd be very interested in developing a single common  
library for encoding/escaping/decoding/unescaping of common web  
formats (urls, javascript, basic xss escaping, rss-valid character  
escapes to html-valid ones, etc.) that uses a lightweight invertible  
combinator approach as described in Pierce's work on lenses (various  
papers at http://www.seas.upenn.edu/~harmony/) such that the encode/ 
decode methods are correct-by-construction bijections. There are  
various encoding bits scattered among the Haskell web libraries at  
the moment, each with varying degrees of correctness and conformance.  
It would be nice to direct energy here to a single centralized  
project, which would have some upfront architecture, and then,  
unfortunately, probably no small degree of spec-translation. If  
anyone else is interested in working on such a thing, I'd be  
delighted for ideas/collaboration (or better yet, if someone just  
picked up the idea and ran with it themselves!) (hmm... maybe galois  
has some internal libraries it wouldn't mind sharing as a partial  
basis?)
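
To make the shape concrete, a minimal sketch of what I mean (this is not any
existing library, and the toy escaper is nothing like a real URL codec; the
point is only that encode and decode travel together, compose together, and
the round-trip law is directly testable):

-- An encoder bundled with its inverse.
data Codec a b = Codec { encode :: a -> b, decode :: b -> a }

-- Composition preserves the round-trip law if both parts have it.
composeCodec :: Codec a b -> Codec b c -> Codec a c
composeCodec f g = Codec (encode g . encode f) (decode f . decode g)

-- A toy escaper: ' ' becomes "+", '+' becomes "%2B", '%' becomes "%25".
plusEscape :: Codec String String
plusEscape = Codec (concatMap escape) unescape
  where
    escape ' ' = "+"
    escape '+' = "%2B"
    escape '%' = "%25"
    escape c   = [c]
    unescape ('+':rest)         = ' ' : unescape rest
    unescape ('%':'2':'B':rest) = '+' : unescape rest
    unescape ('%':'2':'5':rest) = '%' : unescape rest
    unescape (c:rest)           = c   : unescape rest
    unescape []                 = []

-- The property one would hand to QuickCheck: decoding undoes encoding.
prop_roundTrip :: String -> Bool
prop_roundTrip s = decode plusEscape (encode plusEscape s) == s

main :: IO ()
main = print (encode plusEscape "a b+c%", prop_roundTrip "a b+c%")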


Regards,
Sterl.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] More on performance

2008-06-04 Thread Sterling Clover

On Jun 4, 2008, at 5:51 AM, Henning Thielemann wrote:
How about assisting the compiler with a helper function named  
'parallel' ?


parallel :: ([a] -> b, [a] -> c) -> [a] -> (b,c)
parallel (f,g) xs = (f xs, g xs)

mean xs =
  uncurry (/) $ parallel (sum,length) xs


? We could state RULES in terms of 'parallel'. By calling  
'parallel', the user tells, that he wants fusion here.


Say
  parallel/foldl/foldl forall f, g, x0, y0.
  parallel (foldl f x0, foldl g y0) = foldl (\(x,y) z -> (f x z, g y z)) (x0,y0)




Well, we already have (&&&). Would a sufficiently specialized rule over  
that be a useful addition to Control.Arrow?
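
For concreteness, a sketch of the two sides under discussion (just an
illustration of the effect such a rule would have, not an actual RULES
pragma; the strict pair keeps the single-pass version from building its own
thunks):

{-# LANGUAGE BangPatterns #-}
import Control.Arrow ((&&&))
import Data.List (foldl')

-- Unfused: two traversals, and the whole list is retained between them.
meanTwoPass :: [Double] -> Double
meanTwoPass = uncurry (/) . (sum &&& (fromIntegral . length))

-- What the proposed rule would ideally rewrite it to: one strict pass
-- accumulating the sum and the count together.
meanOnePass :: [Double] -> Double
meanOnePass xs = s / fromIntegral n
  where
    (s, n) = foldl' step (0, 0 :: Int) xs
    step (!acc, !len) x = (acc + x, len + 1)

main :: IO ()
main = do
  let xs = [1 .. 100000] :: [Double]
  print (meanTwoPass xs)
  print (meanOnePass xs)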


Regards,
Sterl.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is the maturity of Haskell Web Frameworks

2008-06-04 Thread Don Stewart
s.clover:
 
 hvac sounds interesting but at that time at least it was not clear
 whether it was stable or would continue to be maintained.
 
 xhtml and HStringTemplate were overkill for what I wanted, so I  
 wound up
 just using the FastCGI and CGI toolkits themselves.  They are
 surprisingly nice, and with a little bit of wrappers around them for
 things like validating forms, have worked exceptionally well.
 
 
 I'm in a position now where I can make a reasonable promise that hvac  
 will not only be maintained but undergoing continued development. I  
 just, for example, hooked in some basic postgresql support yesterday  
 (although the changes are not tested/pushed to the repo yet).  
 (Speaking of which, I noticed that the HDBC postgresql bindings don't  
 set seState on an error -- is this intentional, or something that  
 should be fixed up?)

Sterling,

Would you like to add some text on hvac and your other web libs
to our web programming wiki/faq,

http://haskell.org/haskellwiki/Practical_web_programming_in_Haskell

Ideally, in a few months time, with enough additions, the full 
story will be covered on this wiki page, and it will be a great
boon to developers wanting to work in Haskell, in this hot area.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] How would you hack it?

2008-06-04 Thread Andrew Coppin
So anyway, today I found myself wanting to build a program that 
generates some test data. I managed to make it *work* without too much 
difficulty, but it wasn't exactly elegant. I'm curios to know how 
higher-order minds would approach this problem. It's not an especially 
hard problem, and I'm sure there are several good solutions possible.


I have a file that contains several thousand words, separated by white 
space. [I gather that on Unix there's a standard location for this 
file?] I want to end up with a file that contains a randomly-chosen 
selection of words. Actually, I'd like the end result to be a LaTeX 
file, containing sections, subsections, paragraphs and sentences. 
(Although obviously the actual sentences will be gibberish.) I'd like to 
be able to select how big the file should be, to within a few dozen 
characters anyway. Exact precision is not required.


How would you do this?

The approach I came up with is to slurp up the words like so:

  raw <- readFile "words.txt"
 let ws = words raw
 let n = length ws
 let wa = listArray (1,n) ws

(I actually used lazy ByteStrings of characters.) So now I have an array 
of packed ByteStrings, and I can pick array indexes at random and use 
unwords to build my gibberish sentences.


The fun part is making a sentence come out the right size. There are two 
obvious possibilities:
- Assume that all words are approximately N characters long, and 
estimate how many words a sentence therefore needs to contain to have 
the required length.
- Start with an empty list and *actually count* the characters as you 
add each word. (You can prepend them rather than append them for extra 
efficiency.)

I ended up taking the latter approach - at least at the sentence level.
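
In code, the counting idea is roughly this sketch (the cycled list in main
is just a stand-in for randomly picked words):

-- Keep taking words from an (infinite) supply until the sentence is
-- roughly the requested number of characters long.
takeAboutN :: Int -> [String] -> [String]
takeAboutN target = go 0
  where
    go _   []     = []
    go len (w:ws)
      | len >= target = []
      | otherwise     = w : go (len + length w + 1) ws  -- +1 for the space

main :: IO ()
main = putStrLn (unwords (takeAboutN 60 (cycle ["lorem", "ipsum", "dolor"])))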

What I actually did was to write a function that builds lists of words. 
There is then another function that builds several sublists of 
approximately the prescribed lengths, inserts some commas, capitalises 
the first letter and appends a fullstop. This therefore generates a 
sentence. After that, there's a function that builds several sentences 
of random size with random numbers of commas and makes a paragraph out 
of them. Next a function gathers several paragraphs and inserts a 
randomly-generated subsection heading. A similar function takes several 
subsections and adds a random section heading.


In my current implementation, all of this is in the IO monad (so I can 
pick things randomly). Also, the whole thing looks very repetitive and 
really if I thought about it harder, it ought to be possible to factor 
it out into something smaller and more elegant. The clause function 
builds clauses of approximately the right size, and each function 
above that (sentence, paragraph, subsection, section) becomes 
progressively less accurate in its sizing. On the other hand, the final 
generated text always for example has 4 subsections to each section, and 
they always look visually the same size. I'd like to make it more 
random, but all the code works by estimating roughly how big a 
particular construct is, and therefore how many of then are required to 
full N characters. For larger and larger N, actually counting this stuff 
would seem somewhat inefficient, so I'm estimating. But that makes it 
hard to add more randomness without losing overall size control.


The final file can end up being a fair way different in size to what I 
requested, and has an annoyingly regular internal structure. But 
essentially, it works.


It would be nice to modify the code to generate HTML - but since it's 
coded so simple-mindedly, that would be a copy & paste job.


Clearly, what I *should* have done is think more about a good 
abstraction before writing miles of code. ;-) So how would you guys do this?


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] very hard to build darcs with win32 ghc-6.8.2!

2008-06-04 Thread Tim Newsham

I built darcs for win32 recently and it was much more difficult than
it should be.  Probably most of the blame goes to ghc-6.8.2 binary
release for win32.  Half of the effort is getting the zlib prereq
working.

Previously to build zlib for win32 ghc I did the following:
http://www.haskell.org/pipermail/haskell-cafe/2007-March/023059.html

However, now, the gcc binary for ghc-6.8.2 does not work as well as
I would like which requires a bit more effort:

- It did not automatically add mingw headers during compilation, and
  to fix this I had to force some extra flags.  I built a shell script
  xcc:

  #!/bin/sh
  GHC=/ghc/ghc-6.8.2
  PATH=/c$GHC/gcc-lib:$PATH
  export PATH
  /c$GHC/gcc -I $GHC/include -I $GHC/include/mingw \
 -I $GHC/gcc-lib/include -L $GHC/gcc-lib $@

- During linking it did not find crt2.o and adding it to PATH and
  -L did not help, so I just copied wholesale all of
  /c/ghc/ghc-6.8.2/gcc-lib into my zlib source directory.

- At this point I was able to build with:
  CC=./xcc ./configure --prefix=/c/ghc/ghc-6.8.2 \
   --libdir=/c/ghc/ghc-6.8.2
  make
  make install

Why is the gcc in ghc's directory so non-functional?

Ok, so with zlib (the C library) installed, the zlib haskell package
installs properly.  Dependency resolved.

Next, to build darcs I had to configure it without libcurl support
(or alternately spend time chasing down that dep, pass for now). 
Using cygwin I ran ./configure which falsely uses the cygwin gcc

for configuration checks, but since gcc is never used directly during
the compilation process, that doesn't matter that much.  I could have
tried the xcc trick here again, but didn't bother.  I ran into
two problems during the build:

- -Werror is specified in the GNUMakefile and there are many warnings.
  I just removed -Werror for now.

- During linking it was not able to resolve SleepEx from
  src/win32/System/Posix.hs.  I could not figure out what is going
  on here.  I tried adding -lkernel32 and -L /ghc/ghc-6.8.2/gcc-lib
  -Lkernel32 to the Makefile and it still did not work even though
  /ghc/ghc-6.8.2/gcc-lib/libkernel32.a has [EMAIL PROTECTED] defined(!)
  Finally I bit the bullet and hacked around this by noticing that
  mingw headers have _sleep() defined.  I replaced the code in Posix.hs
  with:

  foreign import ccall "_sleep" c_sleep :: CULong -> IO ()

  sleep :: Integer -> IO CInt
  sleep n = c_sleep (fromIntegral n) >> return (toEnum $ fromIntegral n)

At this point darcs builds and the binary seems to work (so far).
I don't know the implication of my sleep hack (which doesn't return
the actual time slept).

Here's a small test program which uses FFI to SleepEx which I was
not able to get working with win32 ghc-6.8.2.

--
{-# OPTIONS -fglasgow-exts -fffi #-}
module Main where
import Foreign.C.Types

foreign import ccall "SleepEx" c_SleepEx :: CUInt -> CInt -> IO CInt

main = do
    putStrLn "start"
    n <- c_SleepEx (2*1000) 1
    print n
---

So, what is going on with ghc-6.8.2?  Why is the gcc so hard to use
now?  Why can't I get FFI working with standard win32 functions?
Why aren't there prebuilt win32 darcs binaries anymore?

Tim Newsham
http://www.thenewsh.com/~newsham/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How would you hack it?

2008-06-04 Thread Bulat Ziganshin
Hello Andrew,

Wednesday, June 4, 2008, 10:33:00 PM, you wrote:
 I have a file that contains several thousand words, seperated by white
 space. [I gather that on Unix there's a standard location for this 
 file?] I want to end up with a file that contains a randomly-chosen 
 selection of words.

was this letter generated automatically? :)  good work! :)))


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How would you hack it?

2008-06-04 Thread Gregory Collins
Andrew Coppin [EMAIL PROTECTED] writes:

 Clearly, what I *should* have done is think more about a good
 abstraction before writing miles of code. ;-) So how would you guys do
 this?

If you want text that roughly resembles English, you're better off
getting a corpus of real English text and running it through a Markov
chain. Mark Dominus has written a few blog posts about this topic
recently, see http://blog.plover.com/lang/finnpar.html.

G.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: How would you hack it?

2008-06-04 Thread Achim Schneider
Andrew Coppin [EMAIL PROTECTED] wrote:

 I have a file that contains several thousand words, seperated by
 white space. [I gather that on Unix there's a standard location for
 this file?]

Looking at /usr/share/dict/words, I'm assured that the proper separator
is \n.

 Clearly, what I *should* have done is think more about a good 
 abstraction before writing miles of code. ;-) So how would you guys
 do this?

Generate a Map Int [String] map, with the latter list being an infinite
list of words with that particular size.

Now assume that you want to have a 100 character sentence. You start by
looking if you got any 100 character word, if yes it's your sentence,
if not you divide it in half (maybe offset by a weighted random
factor [1]) and start over again.

You can then specify your whole document along the lines of

(capitalise $ words 100) ++ ". " ++ (capitalise $ words 10) ++ "? " ++
(capitalise $ words 20) ++ "oneone1!"

[1] Random midpoint displacement is a very interesting topic by itself.
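
A rough sketch of that scheme (the cycled buckets and the plain halving
stand in for the random choices; the weighted offset is left out):

import qualified Data.Map as Map
import           Data.Map (Map)

-- Group the dictionary by word length; each bucket is cycled so we can
-- keep drawing from it.
byLength :: [String] -> Map Int [String]
byLength ws = fmap cycle (Map.fromListWith (++) [(length w, [w]) | w <- ws])

-- Build a phrase of roughly n characters: take a word of exactly that
-- length if one exists, otherwise split n (minus one for the space)
-- and recurse on the two halves.
phrase :: Map Int [String] -> Int -> [String]
phrase m n
  | n <= 0                       = []
  | Just (w:_) <- Map.lookup n m = [w]
  | otherwise                    = phrase m half ++ phrase m (n - half - 1)
  where half = n `div` 2

main :: IO ()
main = do
  let dict = byLength (words "a an the cat sat on mat with some longer words")
  putStrLn (unwords (phrase dict 40))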

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is the maturity of Haskell Web Frameworks

2008-06-04 Thread John Goerzen
Sterling Clover wrote:
 hvac sounds interesting but at that time at least it was not clear
 whether it was stable or would continue to be maintained.

 xhtml and HStringTemplate were overkill for what I wanted, so I  
 wound up
 just using the FastCGI and CGI toolkits themselves.  They are
 surprisingly nice, and with a little bit of wrappers around them for
 things like validating forms, have worked exceptionally well.

 
 I'm in a position now where I can make a reasonable promise that hvac  
 will not only be maintained but undergoing continued development. I  
 just, for example, hooked in some basic postgresql support yesterday  
 (although the changes are not tested/pushed to the repo yet).  
 (Speaking of which, I noticed that the HDBC postgresql bindings don't  
 set seState on an error -- is this intentional, or something that  
 should be fixed up?)

That would be a bug, I think.  Could you submit it over at
software.complete.org, ideally with test code?  (Even more ideally, with
a patch grin)

-- John
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: How would you hack it?

2008-06-04 Thread Achim Schneider
Gregory Collins [EMAIL PROTECTED] wrote:

 Andrew Coppin [EMAIL PROTECTED] writes:
 
  Clearly, what I *should* have done is think more about a good
  abstraction before writing miles of code. ;-) So how would you guys
  do this?
 
 If you want text that roughly resembles English, you're better off
 getting a corpus of real English text and running it through a Markov
 chain. Mark Dominus has written a few blog posts about this topic
 recently, see http://blog.plover.com/lang/finnpar.html.
 
If you run one over obscure academic papers, you can even generate
publishable results. I don't have a link ready, but there was a fun
incident involving this.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How would you hack it?

2008-06-04 Thread Henning Thielemann

On Wed, 4 Jun 2008, Andrew Coppin wrote:

 How would you do this?

 The approach I came up with is to slurp up the words like so:

   raw - readFile words.txt
   let ws = words raw
   let n = length ws
   let wa = listArray (1,n) ws

 (I actually used lazy ByteStrings of characters.) So now I have an array
 of packed ByteStrings, and I can pick array indexes at random and use
 unwords to build my gibberish sentences.

Sounds like a generator for scientific articles. :-)
Maybe
   http://hackage.haskell.org/cgi-bin/hackage-scripts/package/markov-chain
 can be of help for you. It's also free of randomIO.

 In my current implementation, all of this is in the IO monad (so I can
 pick things randomly).

You know of
   http://www.haskell.org/pipermail/haskell-cafe/2006-December/020005.html
?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How would you hack it?

2008-06-04 Thread Henning Thielemann

On Wed, 4 Jun 2008, Achim Schneider wrote:

 Gregory Collins [EMAIL PROTECTED] wrote:

  Andrew Coppin [EMAIL PROTECTED] writes:
 
   Clearly, what I *should* have done is think more about a good
   abstraction before writing miles of code. ;-) So how would you guys
   do this?
 
  If you want text that roughly resembles English, you're better off
  getting a corpus of real English text and running it through a Markov
  chain. Mark Dominus has written a few blog posts about this topic
  recently, see http://blog.plover.com/lang/finnpar.html.
 
 If you run one over obscure academic papers, you can even generate
 publishable results. I don't have a link ready, but there was a fun
 incident involving this.

A famous paper generator is
   http://pdos.csail.mit.edu/scigen/
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How would you hack it?

2008-06-04 Thread Andrew Coppin

Gregory Collins wrote:

Andrew Coppin [EMAIL PROTECTED] writes:

  

Clearly, what I *should* have done is think more about a good
abstraction before writing miles of code. ;-) So how would you guys do
this?



If you want text that roughly resembles English, you're better off
getting a corpus of real English text and running it through a Markov
chain. Mark Dominus has written a few blog posts about this topic
recently, see http://blog.plover.com/lang/finnpar.html.
  


This is probably overkill for my purposes.

However, if you can find me a source that explains what a Markov chain 
actually *is*, I'd be quite interested.


[I've seen it mentioned several times in relation to data compression, 
but Wikipedia's article is too cryptic for me to comprehend. I think 
Wikipedia is just a poor way to learn completely new concepts; 
Wikipedia's description of digital filters is also utterly 
incomprehensible, but it turns out the subject isn't actually that hard.]


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: How would you hack it?

2008-06-04 Thread Andrew Coppin

Achim Schneider wrote:

Andrew Coppin [EMAIL PROTECTED] wrote:

  

I have a file that contains several thousand words, seperated by
white space. [I gather that on Unix there's a standard location for
this file?]

Looking at /usr/share/dict/words, I'm assured that the proper seperator
is \n.
  


Thanks. I did look around trying to find this, but ultimately failed. 
(Is it a standard component, or is it installed as part of some specific 
application?)


As I understand it, Haskell's words function will work on any kind of 
white space - spaces, line feeds, carriage returns, tabs, etc. - so it 
should be fine. ;-) Since I'm developing on Windows, what I actually did 
was have Google find me a file online that I can download.


[Remember my post a while back? GHC panic? Apparently GHC doesn't like 
it if you try to represent the entire 400 KB file as a single [String]...]



Generate a Map Int [String] map, with the latter list being an infinite
list of words with that particular size.

Now assume that you want to have a 100 character sentence. You start by
looking if you got any 100 character word, if yes it's your sentence,
if not you divide it in half (maybe offset by a weighted random
factor [1]) and start over again.

You can then specify your whole document along the lines of

(capitalise $ words 100) ++ .  ++ (capitalise $ words 10) ++ ? ++
(capitalise $ words 20) ++ oneone1! 


[1] Random midpoint displacement is a very interesting topic by itself.
  


I'm not following your logic, sorry...

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: How would you hack it?

2008-06-04 Thread Achim Schneider
Andrew Coppin [EMAIL PROTECTED] wrote:

 Achim Schneider wrote:
  Andrew Coppin [EMAIL PROTECTED] wrote:
 

  I have a file that contains several thousand words, seperated by
  white space. [I gather that on Unix there's a standard location for
  this file?]
  Looking at /usr/share/dict/words, I'm assured that the proper
  seperator is \n.

 
 Thanks. I did look around trying to find this, but ultimately failed. 
 (Is it a standard component, or is it installed as part of some
 specific application?)
 
[EMAIL PROTECTED] ~ % equery b /usr/share/dict/words
[ Searching for file(s) /usr/share/dict/words in *... ]
sys-apps/miscfiles-1.4.2 (/usr/share/dict/words)
[EMAIL PROTECTED] ~ % eix miscfiles
[I] sys-apps/miscfiles
 Available versions:  1.4.2 {minimal}
 Installed versions:  1.4.2(18:27:27 02/14/07)(-minimal)
 Homepage:http://www.gnu.org/directory/miscfiles.html
 Description: Miscellaneous files

  Generate a Map Int [String] map, with the latter list being an
  infinite list of words with that particular size.
 
  Now assume that you want to have a 100 character sentence. You
  start by looking if you got any 100 character word, if yes it's
  your sentence, if not you divide it in half (maybe offset by a
  weighted random factor [1]) and start over again.
 
  You can then specify your whole document along the lines of
 
  (capitalise $ words 100) ++ .  ++ (capitalise $ words 10) ++ ?
  ++ (capitalise $ words 20) ++ oneone1! 
 
  [1] Random midpoint displacement is a very interesting topic by
  itself. 
 
 I'm not following your logic, sorry...

That's probably because I just described the points and not the rest
of the morphisms... imagine some plumbing and tape between my sentences.

Midpoint displacement is a great way to achieve randomness while still
keeping a uniform appearance. In the defining paper, that I don't have
ready right now, an example was shown where a realistic outline of
Australia was generated from ten or so data points: If you display it
next to the actual outline, only a geographer could tell which one's
the fake.
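
For the curious, a one-dimensional sketch of the displacement idea (nothing
to do with the word problem directly, just the technique):

import System.Random (StdGen, mkStdGen, randomR, split)

-- Recursively displace the midpoint of each segment by a random offset
-- that halves per level, giving 2^depth + 1 points across the segment.
displace :: Int -> Double -> (Double, Double) -> StdGen -> [Double]
displace 0     _     (l, r) _ = [l, r]
displace depth scale (l, r) g =
  let (offset, g') = randomR (-scale, scale) g
      (gl, gr)     = split g'
      mid          = (l + r) / 2 + offset
      left         = displace (depth - 1) (scale / 2) (l, mid) gl
      right        = displace (depth - 1) (scale / 2) (mid, r) gr
  in init left ++ right   -- the midpoint is in both halves; keep one copy

main :: IO ()
main = mapM_ print (displace 4 8 (0, 0) (mkStdGen 42))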

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] very hard to build darcs with win32 ghc-6.8.2!

2008-06-04 Thread Daniel Fischer
Am Mittwoch, 4. Juni 2008 22:26 schrieb Tim Newsham:

 Here's a small test program which uses FFI to SleepEx which I was
 not able to get working with win32 ghc-6.8.2.

 --
 {-# OPTIONS -fglasgow-exts -fffi #-}
 module Main where
 import Foreign.C.Types

  foreign import ccall "SleepEx" c_SleepEx :: CUInt -> CInt -> IO CInt

I seem to remember it should be stdcall on windows.
That might also have a role in not finding [EMAIL PROTECTED]

 main = do
  putStrLn "start"
  n <- c_SleepEx (2*1000) 1
  print n
 ---

 So, what is going on with ghc-6.8.2?  Why is the gcc so hard to use
 now?  Why can't I get FFI working with standard win32 functions?
 Why aren't there prebuilt win32 darcs binaries anymore?

 Tim Newsham
 http://www.thenewsh.com/~newsham/

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How would you hack it?

2008-06-04 Thread Andrew Coppin

Henning Thielemann wrote:

Sounds like a generator for scientific articles. :-)
Maybe
   http://hackage.haskell.org/cgi-bin/hackage-scripts/package/markov-chain
 can be of help for you. It's also free of randomIO.
  


That certainly looks interesting. Presumably if I train it right, it'll 
figure out that sentences need to start uppercase and end with a 
full-stop, and maybe have a few other punctuation marks thrown in. I'm 
not sure I trust it to generate valid LaTeX markup, but I can give it a 
try! ;-)


At any rate, thanks for the link - the possibilities for this look very 
interesting. (All sorts of data to be generated. And hey, maybe if I 
read the source I can even learn some of the theory behind it too...)


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How would you hack it?

2008-06-04 Thread John Melesky

On Jun 4, 2008, at 3:50 PM, Andrew Coppin wrote:
However, if you can find me a source that explains what a Markov  
chain actually *is*, I'd be quite interested.


In a non-rigorous nutshell:

You have the word star. You want to pick another word to follow it.  
It turns out that, based on analyzing a corpus (corpus, here means  
bunch of text you extract information from), the following word  
pairs occur:


star wars (20% of the time)
star cluster (5% of the time)
star dust (10% of the time)
star system (25% of the time)
star -end-of-sentence- (5% of the time)
.
.
.

So you use those occurrence statistics to pick a feasible next word  
(let's choose system, since it's the highest probability here -- in  
practice you'd probably choose one randomly based on a weighted  
likelihood). Then you look for all the word pairs which start with  
system, and choose the next word in the same fashion. Repeat for as  
long as you want.


Those word-pair statistics, when you have them for all the words in  
your vocabulary, comprise the first-level Markov data for your corpus.


When you extend it to word triplets, it's second-level Markov data  
(and it will generate more reasonable fake text). You can build higher  
and higher Markov levels if you'd like.


And, ultimately, though the example is about text, you can use this  
method to generate realistic sequences of any sequential data.
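
A sketch of the first-level version in Haskell (count the pairs, then pick
the successor by weighted random choice):

import qualified Data.Map as Map
import           Data.Map (Map)
import           System.Random (randomRIO)

-- For each word, how often each successor occurs.
type Chain = Map String (Map String Int)

buildChain :: [String] -> Chain
buildChain ws =
  Map.fromListWith (Map.unionWith (+))
    [ (a, Map.singleton b 1) | (a, b) <- zip ws (drop 1 ws) ]

-- Pick a successor with probability proportional to its count.
nextWord :: Chain -> String -> IO (Maybe String)
nextWord chain w =
  case fmap Map.toList (Map.lookup w chain) of
    Nothing    -> return Nothing
    Just pairs -> do
      i <- randomRIO (1, sum (map snd pairs))
      return (Just (pick i pairs))
  where
    pick i ((x, n) : rest)
      | i <= n    = x
      | otherwise = pick (i - n) rest
    pick _ []     = error "weights exhausted"

main :: IO ()
main = do
  let corpus = words "star wars star system star dust star system star cluster"
  nextWord (buildChain corpus) "star" >>= print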


Hope that helps.

-johnnn


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How would you hack it?

2008-06-04 Thread Lanny Ripple
You might want to skim Shannon's 'A Mathematical Theory of
Communication'.  Part 1, Sections 2 and 3 are almost exactly your
topic.  (Or at least the direction you've headed in. :)

  http://plan9.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf

  -ljr

Andrew Coppin wrote:
 Henning Thielemann wrote:
 Sounds like a generator for scientific articles. :-)
 Maybe
   
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/markov-chain
  can be of help for you. It's also free of randomIO.
   
 
 That certainly looks interesting. Presumably if I train it right, it'll
 figure out that sentences need to start uppercase and end with a
 full-stop, and maybe have a few other punctuation marks thrown in. I'm
 not sure I trust it to generate valid LaTeX markup, but I can give it a
 try! ;-)
 
 At any rate, thanks for the link - the possibilities for this look very
 interesting. (All sorts of data to be generated. And hey, maybe if I
 read the source I can even learn some of the theory behind it too...)
 
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: cabal and platform-independent haskell installation management (again) (Re: Re: Ubuntu and ghc)

2008-06-04 Thread Duncan Coutts

On Wed, 2008-06-04 at 17:22 +0200, Achim Schneider wrote:

 The question, IMHO, seems to be
 
  How would a package manager for a posix-compliant kinetic look like?

http://nixos.org/index.html


Duncan

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] very hard to build darcs with win32 ghc-6.8.2!

2008-06-04 Thread Duncan Coutts

On Wed, 2008-06-04 at 10:26 -1000, Tim Newsham wrote:
 I built darcs for win32 recently and it was much more difficult than
 it should be.  Probably most of the blame goes to ghc-6.8.2 binary
 release for win32.  Half of the effort is getting the zlib prereq
 working.
 
 Previously to build zlib for win32 ghc I did the following:
 http://www.haskell.org/pipermail/haskell-cafe/2007-March/023059.html

The recent versions of the zlib package on hackage bundle a complete
copy of the zlib C library for the benefit of windows users (it uses the
system zlib on all other systems).

So it should Just Work(tm).


Duncan


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Laziness leaks

2008-06-04 Thread Ryan Ingram
On 6/4/08, apfelmus [EMAIL PROTECTED] wrote:
 Note that lazy evaluation never wastes time; evaluating a term with lazy
 evaluation will always take less reduction steps than doing so eagerly or
 partly eagerly.

True, but you can still have a time leak; this is particularly
relevant in soft-real-time apps (like almost every app you use on a
regular basis, from your editor to games); a time leak is when a
computation that would take X time if evaluated every time step is
left for many timesteps without being evaluated, leading to a hitch in
responsiveness when it eventually is demanded N frames later taking
N*X time.

Eager applications almost never have this sort of time leak, but
it's easy for it to happen with lazy evaluation.

A simple example: consider a variable that holds the number of
timesteps since the app launched, for example.  Every time step it
gets incremented by 1.  If the result is evaluated every time step, it
takes a constant amount of time per timestep.  But if you go a long
time without evaluating it, you end up with both a space leak (as the
+1 thunks build up) but a time leak as well--when you eventually
evaluate it, it takes O(n) time, where n is the number of frames since
the variable was last evaluated.
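
A tiny sketch of that counter (modifyIORef defers every (+1); modifyIORef'
pays as it goes, which is what a per-frame update wants):

import Data.IORef

main :: IO ()
main = do
  lazyCounter   <- newIORef (0 :: Integer)
  strictCounter <- newIORef (0 :: Integer)
  let steps = [1 .. 100000] :: [Int]
  -- Lazy: each step just wraps another (+1) thunk around the old value;
  -- space grows per frame and all the work is paid at the final read.
  mapM_ (\_ -> modifyIORef  lazyCounter   (+ 1)) steps
  -- Strict: the new value is forced every step, so each step costs O(1).
  mapM_ (\_ -> modifyIORef' strictCounter (+ 1)) steps
  readIORef strictCounter >>= print
  readIORef lazyCounter   >>= print   -- the deferred work happens here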

  -- ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is the maturity of Haskell Web Frameworks

2008-06-04 Thread Paul L
Pardon me for hijacking this thread, but I have an idea to build a
different kind of Web Framework and am not sure if somebody has
already done it.

The idea is to take REST further: every HTML page you see is a program
in its running state (as a continuation monad). Each click on its link
or form submission is seen as feeding data to resume its continuation.

So instead of writing a server-side program that responds to many CGI
calls, you write a single ordinary program that describes the
application logic, which during its execution gets represented as a
HTML page and interpreted by the server.

The server is then very much like a VM or an interpreter of an
embedded language, with execution stacks entirely encoded and stored
in each HTML page sent to the user and back from the user as an
encoded URL or form data. So the server is entirely stateless.

Besides providing scalability, the main advantage of this framework is
that web programs can be written in a natural way which totally ignores
stuff like HTTP/CGI, protocol, session, client-server, etc. Its
compiler or interpreter will figure out what part of the program data
(e.g. the code, the static environment) resides on the server side,
and what part is encoded into the HTML pages (e.g., the heap, the
dynamic environment).
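
As a toy sketch of the programming model (with a console loop standing in
for the server, and no attempt at the hard part, which is serialising the
suspended continuation into a page or URL):

-- A web application as a resumable program: it either shows a final page,
-- or shows a question and waits for the user's answer before continuing.
data Flow
  = Page String
  | AskFor String (String -> Flow)

-- The application logic: one straight-line program with no mention of
-- HTTP, sessions or request handlers.
app :: Flow
app =
  AskFor "Your name?" $ \name ->
  AskFor "Favourite language?" $ \lang ->
  Page ("Hello " ++ name ++ ", enjoy " ++ lang ++ "!")

-- Stand-in for the server: each AskFor is one request/response round trip.
serve :: Flow -> IO ()
serve (Page html)     = putStrLn html
serve (AskFor q cont) = putStrLn q >> getLine >>= serve . cont

main :: IO ()
main = serve app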

Does such a beast exist or am I entirely day dreaming?
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: What is the maturity of Haskell Web Frameworks

2008-06-04 Thread Achim Schneider
Paul L [EMAIL PROTECTED] wrote:

 Does such a beast exist or am I entirely day dreaming?

Daydreaming, at least partly: Just consider state that can't be stored
client-side at all, e.g. the contents of a wiki page.

Networking or HTML as interface isn't the real problem: It's the
multi-headedness of applications that make things complicated. As soon
as you have multiple programs or multiple instances of the same program
modify shared data, things get involved.

Traffic is another problem if state accumulates.

Thinking about REST, a nice feature to have would be session timeout
notifying: The server can keep the connection open and replace the
displayed page with one notifying the user that he's too slow for
current timeout settings.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How would you hack it?

2008-06-04 Thread Evan Laforge
On Wed, Jun 4, 2008 at 3:23 PM, Lanny Ripple [EMAIL PROTECTED] wrote:
 You might want to skim Shannon's 'A Mathematical Theory of
 Communication'.  Part 1, Sections 2 and 3 are almost exactly your
 topic.  (Or at least the direction you've headed in. :)

  http://plan9.bell-labs.com/cm/ms/what/shannonday/shannon1948.pdf

Kernighan and Pike's The Practice of Programming has a chapter that
develops a Markov generator in several different languages as a case
study.  None of them are functional, but it's a pretty clear
description of the idea.

The plan9 in the url reminded me :)
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Laziness leaks

2008-06-04 Thread Achim Schneider
Ryan Ingram [EMAIL PROTECTED] wrote:

 On 6/4/08, apfelmus [EMAIL PROTECTED] wrote:
  Note that lazy evaluation never wastes time; evaluating a term with
  lazy evaluation will always take less reduction steps than doing so
  eagerly or partly eagerly.
 
 True, but you can still have a time leak; this is particularily
 relevant in soft-real-time apps (like almost every app you use on a
 regular basis, from your editor to games); a time leak is when a
 computation that would take X time if evaluated every time step is
 left for many timesteps without being evaluated, leading to a hitch in
 responsiveness when it eventually is demanded N frames later taking
 N*X time.
 
 Eager applications almost never have this sort of time leak, but
 it's easy for it to happen with lazy evaluation.
 
 A simple example: consider a variable that holds the number of
 timesteps since the app launched.  Every time step it
 gets incremented by 1.  If the result is evaluated every time step, it
 takes a constant amount of time per timestep.  But if you go a long
 time without evaluating it, you end up with both a space leak (as the
 +1 thunks build up) and a time leak as well--when you eventually
 evaluate it, it takes O(n) time, where n is the number of frames since
 the variable was last evaluated.
 
There won't ever be a space leak without a time leak, nor a time leak
without a space leak; I'd just call it a leak.

You don't often come across space leaks in strict programs because
data is usually allocated statically even if execution is non-strict.

Piping /dev/zero into a program that just sleeps does leak space,
though.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] What is the maturity of Haskell Web Frameworks

2008-06-04 Thread Brandon S. Allbery KF8NH


On 2008 Jun 4, at 22:30, Paul L wrote:


The server is then very much like a VM or an interpreter of an
embedded language, with execution stacks entirely encoded and stored
in each HTML page sent to the user and back from the user as an
encoded URL or form data. So the server is entirely stateless.



Mmm, if any of that HTML-stored state is sensitive server information,
this becomes a problem. (E.g., can I trick your application into
thinking I'm an admin and then go starting/stopping processes,
changing passwords, etc.?)  You need to use extra care if anything
sensitive is put where the client can munge it.
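
One common precaution, as a sketch (an illustration only; the `mac`
function below is a toy checksum standing in for a real keyed MAC from
a crypto library): tag the state when rendering the page and refuse
anything whose tag does not verify on the way back in.

type Key = String

-- Toy stand-in, NOT real cryptography: in practice use an HMAC keyed
-- with a server-side secret.
mac :: Key -> String -> String
mac key s = show (sum (zipWith (*) [1 ..] (map fromEnum (key ++ s))))

-- Attach the tag when the state is written into the page.
seal :: Key -> String -> String
seal key state = mac key state ++ ":" ++ state

-- Check the tag when the state comes back from the client.
open :: Key -> String -> Maybe String
open key tagged =
  case break (== ':') tagged of
    (tag, ':' : state) | tag == mac key state -> Just state
    _                                         -> Nothing

For example, open "secret" (seal "secret" "role=user") gives
Just "role=user", while a string the client has edited gives Nothing.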


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] How would you hack it?

2008-06-04 Thread Bulat Ziganshin
Hello Henning,

Thursday, June 5, 2008, 12:55:19 AM, you wrote:

 Sounds like a generator for scientific articles. :-)
 Maybe
   
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/markov-chain
  can be of help for you. It's also free of randomIO.

I once used something like this to generate this brilliant text:

can INR-s
O, by thenion, Laertes,
These at these haste is defeling and mornione.

KING CLAUDIUS   Heat killowers, and our vowst'

GUILDENSTERNAy, magamovinob vow;
And lo leave they come, and he sole tobly all you,
And, or spomir: So, wherefore hiacriful at ast, my lord, I have no 
orason; Frous stonbitter; at I'll I think himbers, seempended body jooke!
That would be tracious arest, Rt: sor 'tis timeake or bifuchol
gugnme:
If thoughRqublest they are gave with this cercric if, for the no 
POLONIUS]

KING CLAUDIUS   I will violect of the queen with
Dity.

ROSENCRANTZ 'Stive to came fat: the
ne further then, which he his worldy, it is means me.



i've put a full copy to http://haskell.org/bz/not_exactly_hamlet.txt


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] How would you hack it?

2008-06-04 Thread Bulat Ziganshin
Hello Andrew,

Thursday, June 5, 2008, 12:50:28 AM, you wrote:

 However, if you can find me a source that explains what a Markov chain
 actually *is*, I'd be quite interested.

*afaik*, the idea is simple: an order-1 Markov chain is just text
generated according to the probability of each char, and order-n is text
generated using the probability of each char in the context of the
previous n-1 chars. You just need to gather stats from the previous papers :)

You can use this idea both to generate new natural-looking words and,
by gathering stats about the words themselves, to generate
natural-looking sentences.
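
A minimal character-level sketch of this scheme (an illustration only,
assuming the standard containers and random packages):

import qualified Data.Map as M
import Data.List (tails)
import System.Random (StdGen, mkStdGen, randomR)

type Table = M.Map String String          -- k-char context -> all observed successors

train :: Int -> String -> Table
train k corpus = M.fromListWith (++)
  [ (ctx, [next])
  | chunk <- tails corpus
  , let (ctx, rest) = splitAt k chunk
  , length ctx == k
  , next : _ <- [rest] ]                  -- skip tail ends that have no successor

generate :: Int -> Table -> String -> Int -> StdGen -> String
generate k table seed n g0 = go seed n g0
  where
    go _   0 _ = []
    go ctx m g = case M.lookup (lastN k ctx) table of
      Nothing    -> []                    -- context never seen in the corpus: stop
      Just succs ->
        let (i, g') = randomR (0, length succs - 1) g
            c       = succs !! i
        in  c : go (ctx ++ [c]) (m - 1) g'
    lastN j xs = drop (length xs - j) xs

main :: IO ()
main = do
  let corpus = "the quick brown fox jumps over the lazy dog and the quick cat"
      k      = 3
      seed   = take k corpus
  putStrLn (seed ++ generate k (train k corpus) seed 60 (mkStdGen 42))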


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: library for drawing charts

2008-06-04 Thread Tim Docker
Neal Alexander wrote:
 I was using the HaskellCharts library and needed the same two things; so
 I rolled a quick and dirty pie chart generator (Barchart is on the TODO
 list).


 http://72.167.145.184:8000/Screenshot.png
 http://72.167.145.184:8000/PieChart.hs

Nice! Do you mind if I refactor that a bit and add it to the HaskellCharts
library? The license is BSD.

Tim


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How would you hack it?

2008-06-04 Thread Henning Thielemann

On Wed, 4 Jun 2008, Andrew Coppin wrote:

 Henning Thielemann wrote:
  Sounds like a generator for scientific articles. :-)
  Maybe
 http://hackage.haskell.org/cgi-bin/hackage-scripts/package/markov-chain
   can be of help for you. It's also free of randomIO.

 That certainly looks interesting. Presumably if I train it right, it'll
 figure out that sentences need to start uppercase and end with a
 full-stop, and maybe have a few other punctuation marks thrown in.

If you use different encodings for the periods in abbreviations and the
periods that end sentences, this is ensured.

 I'm not sure I trust it to generate valid LaTeX markup, but I can give
 it a try! ;-)

You may not want to construct a Markov chain over characters but over
larger objects, say words and LaTeX commands. However, when it comes to
correctly matched parentheses, a Markov chain is probably not the
appropriate tool; a grammar is.
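
A minimal sketch of the word-level variant (an illustration only),
reusing the table idea from the character-level sketch earlier in the
thread but keyed on the previous word or LaTeX token:

import qualified Data.Map as M

type WordTable = M.Map String [String]    -- previous word -> observed next words

trainWords :: String -> WordTable
trainWords corpus = M.fromListWith (++) (zip ws (map (:[]) (drop 1 ws)))
  where ws = words corpus
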
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] How would you hack it?

2008-06-04 Thread Henning Thielemann

On Wed, 4 Jun 2008, John Melesky wrote:

 So you use those occurrence statistics to pick a feasible next word
 (let's choose system, since it's the highest probability here -- in
 practice you'd probably choose one randomly based on a weighted
 likelihood). Then you look for all the word pairs which start with
 system, and choose the next word in the same fashion. Repeat for as
 long as you want.

A Markov chain means that you have a sequence of random experiments,
where the outcome of each experiment depends exclusively on a fixed number
(the level) of experiments immediately before the current one.
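
In standard notation, an order-k (level-k) chain satisfies:

% Order-k Markov property: the next outcome depends only on the last k outcomes.
P(X_n = x \mid X_{n-1}, X_{n-2}, \ldots, X_1)
  = P(X_n = x \mid X_{n-1}, X_{n-2}, \ldots, X_{n-k})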

 Those word-pair statistics, when you have them for all the words in
 your vocabulary, comprise the first-level Markov data for your corpus.

 When you extend it to word triplets, it's second-level Markov data
 (and it will generate more reasonable fake text). You can build higher
 and higher Markov levels if you'd like.

If the level is too high, you will just reproduce the training text.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe