Re: [Haskell-cafe] Core packages and locale support

2010-06-25 Thread Roman Cheplyaka
* Jason Dagit da...@codersbase.com [2010-06-24 20:52:03-0700]
 On Sat, Jun 19, 2010 at 1:06 AM, Roman Cheplyaka r...@ro-che.info wrote:
 
  While ghc 6.12 finally has proper locale support, core packages (such as
  unix) still use withCString and therefore work incorrectly when an argument
  (e.g. a file path) is not ASCII.
 
 
 Pardon me if I'm misunderstanding withCString, but my understanding of unix
 paths is that they are to be treated as strings of bytes.  That is, unlike
 windows, they do not have an encoding predefined.  Furthermore, you could
 have two filepaths in the same directory with different encodings due to
 this.
 
 In this case, what would be the correct way of handling the paths?
 Converting to a Haskell String would require knowing the encoding, right?
 My reasoning is that the Haskell Char type is meant to correspond to code
 points, so putting characters into a String means you have to know their
 code points, which are distinct from their (multi-)byte encoding. Right?
 
 Perhaps I have some details wrong?  If so, please clarify.

Jason,

you got everything right here. So, as you said, there is a mismatch
between the representation in Haskell (a list of code points) and the
representation in the operating system (a list of bytes), so we need to
know the encoding. The encoding is supplied by the user via the locale
(https://secure.wikimedia.org/wikipedia/en/wiki/Locale), in particular
the LC_CTYPE variable.

The problem with encodings is not new -- it was already solved e.g. for
input/output.
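
To make the mismatch concrete, here is a minimal, self-contained UTF-8 encoder (an illustration only, not the proposed patch; real code should use GHC 6.12's encoding machinery): a String is a list of code points, and the same String maps to different byte lists under different encodings.

```haskell
import Data.Bits (shiftR, (.&.), (.|.))
import Data.Char (ord)
import Data.Word (Word8)

-- A Haskell String is a list of code points; the OS sees a list of
-- bytes whose values depend on the encoding.  This toy UTF-8 encoder
-- (code points up to U+FFFF only) shows the translation explicitly.
encodeUtf8 :: String -> [Word8]
encodeUtf8 = concatMap enc
  where
    enc c
      | n < 0x80  = [byte n]                                -- plain ASCII, 1 byte
      | n < 0x800 = [0xC0 .|. byte (n `shiftR` 6), cont n]  -- 2 bytes
      | otherwise = [ 0xE0 .|. byte (n `shiftR` 12)
                    , cont (n `shiftR` 6)
                    , cont n ]                              -- 3 bytes
      where n = ord c
    byte :: Int -> Word8
    byte = fromIntegral
    cont :: Int -> Word8
    cont n = 0x80 .|. byte (n .&. 0x3F)
```

For example, encodeUtf8 "\xE9" (U+00E9) is [0xC3, 0xA9]: two bytes for one Char, which is exactly the case a byte-per-Char withCString gets wrong.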

As I said, I'm willing to prepare the patches, but I really need a
mentor for this.

-- 
Roman I. Cheplyaka :: http://ro-che.info/
Don't let school get in the way of your education. - Mark Twain
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [***SPAM***] Re: [Haskell-cafe] How does one get off haskell?

2010-06-25 Thread jur

On Jun 24, 2010, at 10:41 PM, cas...@istar.ca wrote:

 Quoting Andrew Coppin andrewcop...@btinternet.com:
 
 Serguey Zefirov wrote:
 I should suggest code generation from Haskell to C#/Java and PHP.
 
 Like Barrelfish, Atom, HJScript and many others EDSLs out there.
 
 You will save yourself time, you will enjoy Haskell. Probably, you will
 have problems with management because your programs will appear there in
 their completeness very suddenly. ;)
 
 I would imagine a bigger problem is that machine-generated C# is probably 
 incomprehensible to humans. ;-)
 
 
 Most machine-generated code is probably incomprehensible to humans. :)
 
 What one wants is a translator back and forth, if one could understand the 
 machine-generated code.
 
Maybe you should translate to Perl. Nobody will notice it is machine-generated.

Jur


 
 


Re: [Haskell-cafe] Re: Mining Twitter data in Haskell and Clojure

2010-06-25 Thread Simon Marlow

On 24/06/2010 21:40, Claus Reinke wrote:

I'll work with Simon to investigate the runtime, but would welcome any
ideas on further speeding up cafe4.


An update on this: with the help of Alex I tracked down the problem
(an integer overflow bug in GHC's memory allocator), and his program
now runs to completion.


So this was about keeping the program largely unchanged in order to keep
the GHC issue repeatable for tracking? Or have you also looked into
removing space leaks in the code (there still seemed to be some left in
the intern/cafe5 version, iirc)?


I haven't touched the code - I'll leave that to the expertise of 
haskell-cafe! :)


It would be nice to have a boiled-down version of this for a benchmark 
though.


Cheers,
Simon




Re: [Haskell-cafe] Core packages and locale support

2010-06-25 Thread Brandon S Allbery KF8NH

On 6/25/10 02:42 , Roman Cheplyaka wrote:
 * Jason Dagit da...@codersbase.com [2010-06-24 20:52:03-0700]
 On Sat, Jun 19, 2010 at 1:06 AM, Roman Cheplyaka r...@ro-che.info wrote:
 While ghc 6.12 finally has proper locale support, core packages (such as
 unix) still use withCString and therefore work incorrectly when argument
 (e.g. file path) is not ASCII.

 Pardon me if I'm misunderstanding withCString, but my understanding of unix
 paths is that they are to be treated as strings of bytes.  That is, unlike
 windows, they do not have an encoding predefined.  Furthermore, you could
 have two filepaths in the same directory with different encodings due to
 this.
 
 you got everything right here. So, as you said, there is a mismatch
 between representation in Haskell (list of code points) and
 representation in the operating system (list of bytes), so we need to
 know the encoding. Encoding is supplied by the user via locale
 (https://secure.wikimedia.org/wikipedia/en/wiki/Locale), particularly
 LC_CTYPE variable.

You might want to look at how Python is dealing with this (including the
pain involved; best to learn from example).

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


[Haskell-cafe] Read instance for GADT

2010-06-25 Thread corentin . dupont

Hello Haskellers,

I'm having trouble writing a Read instance for my GADT.
Argh, this GADT!! It causes me more problems than it solves ;)
Especially with no automatic deriving, it adds a lot of burden to my code.

data Obs a where
    ProposedBy :: Obs Int   -- the player that proposed the tested rule
    Turn       :: Obs Turn  -- the current turn
    Official   :: Obs Bool  -- whether the tested rule is official
    Equ        :: (Eq a, Show a, Typeable a) => Obs a -> Obs a -> Obs Bool
    Plus       :: (Num a) => Obs a -> Obs a -> Obs a
    Time       :: (Num a) => Obs a -> Obs a -> Obs a
    Minus      :: (Num a) => Obs a -> Obs a -> Obs a
    And        :: Obs Bool -> Obs Bool -> Obs Bool
    Or         :: Obs Bool -> Obs Bool -> Obs Bool
    Not        :: Obs Bool -> Obs Bool
    Konst      :: a -> Obs a


instance Read a => Read (Obs a) where
    readPrec = (prec 10 $ do
                   Ident "ProposedBy" <- lexP
                   return ProposedBy)
               +++
               (prec 10 $ do
                   Ident "Official" <- lexP
                   return Official)
               (etc...)

Observable.lhs:120:8:
Couldn't match expected type `Int' against inferred type `Bool'
  Expected type: ReadPrec (Obs Int)
  Inferred type: ReadPrec (Obs Bool)


Indeed, ProposedBy does not have the same type as Official.
Hmm, how to make it all mix together gently?
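
One common way out (a sketch with invented names — AnyObs, readObs, and obsName are not from this thread, and the GADT is cut down to two constructors): a single parser cannot return Obs a for a caller-chosen a, because the constructor found in the input determines the index. Parsing into an existential wrapper sidesteps the mismatch, and the caller recovers the type by pattern matching.

```haskell
{-# LANGUAGE GADTs, ExistentialQuantification #-}

-- Cut down to two constructors for the sketch.
data Obs a where
  ProposedBy :: Obs Int
  Official   :: Obs Bool

-- AnyObs hides the type index, so one parser covers all constructors.
data AnyObs = forall a. AnyObs (Obs a)

-- A toy parser; a real one would use readPrec/lexP in the same way.
readObs :: String -> Maybe AnyObs
readObs "ProposedBy" = Just (AnyObs ProposedBy)
readObs "Official"   = Just (AnyObs Official)
readObs _            = Nothing

-- Callers recover what they parsed by pattern matching.
obsName :: AnyObs -> String
obsName (AnyObs ProposedBy) = "ProposedBy"
obsName (AnyObs Official)   = "Official"
```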


Best,
Corentin




[Haskell-cafe] GHCi and State

2010-06-25 Thread corentin . dupont


Another couple of reflections (sorry for monopolizing):

1. Since I am making a Nomic game, players will have to submit rules. These
rules will be written in a subset of Haskell.
Instead of writing my own reader/interpreter, I'd like to use GHC to compile
them on the fly, and then add them to the current legislation.
What would you suggest for doing that? Any pointers?

2. For now, the game is more or less playable in GHCi. But my concern is:
when you use GHCi, you are in the IO monad, right? How do I add state to this
monad?
I would like the player to be able to compose his rule in GHCi and, when he
is done, submit it with something like:

*Nomic> submitRule myrule

And then the game takes the rule, possibly modifies the current legislation,
and hands control back to GHCi.
So the current legislation has to be a piece of state in GHCi's loop. Is this
possible at all?
submitRule would have a type more or less like this (GameState contains the
legislation):

submitRule :: Rule -> StateT GameState IO ()


Thanks for  your attention! I know this is a bit confused!

Best,
Corentin




Re: [Haskell-cafe] GHCi and State

2010-06-25 Thread Brandon S Allbery KF8NH

On 6/25/10 05:07 , corentin.dup...@ext.mpsa.com wrote:
 Instead of writing my own reader/interpreter, i'd like to use GHC to compil
 them on the fly, and then add them to the current legislation.
 What would you suggest me to do that? Any pointers?

GHC API.  This is likely biting off more than you want to chew, though;
it'll probably be easier to write your own interpreter.

 2. For now, the game is more or less playable in GHCi. But my concern is:
 When you use GHCi, you are in the IO monad, right? How to had state to this
 monad?

  runStateT nomicGame initialState :: IO (a, GameState)
  -- nomicGame :: StateT GameState IO a
  -- initialState :: GameState
  -- use evalStateT instead of runStateT if all you want is the result,
  -- or execStateT if all you want is the final state.
  -- if you want neither:
  --   _ <- runStateT ...

 I would like that the player can compose his rule in GHCi, and when he is
 done, he can submit it in GHCi with something like:
 
 *Nomic> submitRule myrule
 
 And then the game takes the rule, possibly modify the current legislation,
 and give the hand back to GHCi.
 So the current legislation has to be a state of the GHCi's loop. Is this
 possible at all?

Use an IORef to contain the state, if you really want to go this way.  I
wouldn't; take a look at the lambdabot source for the pitfalls of passing
arbitrary user-provided code to GHCi (or GHC API), and how to avoid them.

(In particular, if you're using GHC to parse your rules, what stops the user
code from mangling the GameState on you?)
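
For the IORef route, a minimal sketch (Rule and GameState here are placeholder definitions, not the poster's real types): create the IORef once in a GHCi session and have each command mutate it.

```haskell
import Data.IORef

-- Placeholder types for the sketch.
type Rule = String
newtype GameState = GameState { legislation :: [Rule] }

-- In GHCi, create the state once:   ref <- newIORef (GameState [])
-- then each command mutates it:     submitRule ref "my rule"
submitRule :: IORef GameState -> Rule -> IO ()
submitRule ref r =
  modifyIORef ref (\gs -> GameState (r : legislation gs))
```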

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


Re: [Haskell-cafe] Fwd: signficant improvements to the containers package

2010-06-25 Thread Ketil Malde
Don Stewart d...@galois.com writes:

 Some people might be quite excited by Milan's work on significant
 performance improvements to the containers package...

Yes, this is great news - both a well written article and an important
piece of work on a cornerstone of the Haskell libraries.

But I am also somewhat disturbed that, all this time I've been using
Data.Set/Map, AVL has apparently been a much faster alternative, and I
never got around to trying it out.  Is there a downside to using AVL
compared to Data.Set/Map?  And would similar improvements apply to it?

When performance is starting to matter, it is often because I have large
Sets or Maps - it would also be interesting to compare the memory
requirement of these data structures.  (No matter how much faster a
HashMap is, if it exceeds physical RAM it will be outperformed by any
structure that manages to fit).

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants


Re: [Haskell-cafe] GHCi and State

2010-06-25 Thread Roman Beslik

 On 25.06.10 12:07, corentin.dup...@ext.mpsa.com wrote:

2. For now, the game is more or less playable in GHCi. But my concern is:
When you use GHCi, you are in the IO monad, right? How to had state to this
monad?
I would like that the player can compose his rule in GHCi, and when he is
done, he can submit it in GHCi with something like:

*Nomic> submitRule myrule
You can store the set of rules in an IORef or another IO-mutable type. I
think "you are in the IO monad" is pretty vague; obviously, GHCi runs
pure computations as well.


--
Best regards,
  Roman Beslik.



Re: [Haskell-cafe] Fwd: signficant improvements to the containers package

2010-06-25 Thread Stephen Tetley
On 25 June 2010 05:58, Ivan Miljenovic ivan.miljeno...@gmail.com wrote:

 The reason this came up: Thomas Berekneyi wanted to use such classes
 for the rewrite of FGL, and when he discussed it on #haskell people
 generally indicated that edison was the best out there but was a bit
 long in the tooth and something like it should be re-written (though
 no-one seemed to volunteer... hmmm... :p).

Hi Ivan

Note that if new-FGL depends on new packages it might not be
acceptable for the Platform; see the thread that this message kicks
off:

http://www.haskell.org/pipermail/libraries/2009-August/012378.html

Best wishes

Stephen


Re: [Haskell-cafe] Fwd: signficant improvements to the containers package

2010-06-25 Thread Ivan Lazar Miljenovic
Stephen Tetley stephen.tet...@gmail.com writes:

 On 25 June 2010 05:58, Ivan Miljenovic ivan.miljeno...@gmail.com wrote:

 The reason this came up: Thomas Berekneyi wanted to use such classes
 for the rewrite of FGL, and when he discussed it on #haskell people
 generally indicated that edison was the best out there but was a bit
 long in the tooth and something like it should be re-written (though
 no-one seemed to volunteer... hmmm... :p).

 Hi Ivan

 Note that if new-FGL depends on new packages it might not be
 acceptable for the Platform; see the thread that this message kicks
 off:

 http://www.haskell.org/pipermail/libraries/2009-August/012378.html

That's been brought up already, and yes I know.

-- 
Ivan Lazar Miljenovic
ivan.miljeno...@gmail.com
IvanMiljenovic.wordpress.com


[Haskell-cafe] Re: Mining Twitter data in Haskell and Clojure

2010-06-25 Thread Simon Marlow

On 25/06/2010 04:23, Don Stewart wrote:

deliverable:

Simon -- so how can I get me a new ghc now?  From git, I suppose?  (It
used to live in darcs...)


It still lives in darcs.

Nightly builds are here: http://www.haskell.org/ghc/dist/stable/dist/

You'll want to check with Simon that the patch got pushed first, though.


The patch got pushed to HEAD yesterday

http://darcs.haskell.org/cgi-bin/darcsweb.cgi?r=ghc;a=darcs_commitdiff;h=20100624104654-12142-8a2be995610fba8e37531f7d57a7f1202ed688d7.gz

and was in last night's builds (24 Jun):

http://www.haskell.org/ghc/dist/current/dist/

You could easily cherry-pick the patch into the stable branch, but we're 
not planning to do any more releases of 6.12 so that we can focus on 6.14.


Cheers,
Simon


[Haskell-cafe] Re: proposal: HaBench, a Haskell Benchmark Suite

2010-06-25 Thread Simon Marlow

On 25/06/2010 00:24, Andy Georges wrote:


I've picked up the HaBench/nofib/nobench issue again, needing a
decent set of real applications to do some exploring of what people
these days call split-compilation. We have a framework that was able
to explore GCC optimisations [1] -- the downside there was the
dependency of these optimisations on each other, requiring them to be
done in certain order -- for a multi-objective search space, and
extended this to exploring a JIT compiler [2] for Java in our case --
which posed its own problems. Going one step further, we'd like to
explore the tradeoffs that can be made when compiling on different
levels: source to bytecode (in some sense) and bytecode to native.
Given that LLVM is quickly becoming a state-of-the-art framework and
with the recent GHC support, we figured that Haskell would be an
excellent vehicle to conduct our exploration and research (and the
fact that some people at our lab have a soft spot for Haskell helps
too). Which brings me back to benchmarks.

Are there any inputs available that allow the real part of the suite
to run for a sufficiently long time? We're going to use criterion in
any case given our own expertise with rigorous benchmarking [3,4],
but since we've made a case in the past against short running apps on
managed runtime systems [5], we'd love to have stuff that runs at
least in the order of seconds, while doing useful things. All
pointers are much appreciated.


The short answer is no, although some of the benchmarks have tunable 
input sizes (mainly the spectral ones) and you can 'make mode=slow' to 
run those with larger inputs.


More generally, the nofib suite really needs an overhaul or replacement. 
 Unfortunately it's a tiresome job and nobody really wants to do it. 
There have been various abortive efforts, including nobench and HaBench. 
 Meanwhile we in the GHC camp continue to use nofib, mainly because we 
have some tool infrastructure set up to digest the results 
(nofib-analyse).  Unfortunately nofib has steadily degraded in 
usefulness over time due to both faster processors and improvements in 
GHC, such that most of the programs now run for less than 0.1s and are 
ignored by the tools when calculating averages over the suite.


We have a need not just for plain Haskell benchmarks, but benchmarks 
that test


 - GHC extensions, so we can catch regressions
 - parallelism (see nofib/parallel)
 - concurrency (see nofib/smp)
 - the garbage collector (see nofib/gc)

I tend to like quantity over quality: it's very common to get just one 
benchmark in the whole suite that shows a regression or exercises a 
particular corner of the compiler or runtime.  We should only keep 
benchmarks that have a tunable input size, however.
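
As a sketch of what a tunable input size looks like in practice (the names and the toy workload are invented, not from nofib): the problem size is a run-time parameter, so the harness can scale the work upward as machines get faster instead of retiring the benchmark.

```haskell
import System.CPUTime (getCPUTime)

-- Toy benchmark with a tunable input size: the harness picks n and can
-- grow it over the years.  The workload is an iterative Fibonacci on
-- Integer, timed with the CPU clock.
runBench :: Int -> IO Integer
runBench n = do
  t0 <- getCPUTime
  let r = fib n
  r `seq` return ()          -- force the result inside the timed region
  t1 <- getCPUTime
  putStrLn ("cpu time (ps): " ++ show (t1 - t0))
  return r

fib :: Int -> Integer
fib n = go n 0 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)
```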


Criterion works best on programs that run for short periods of time, 
because it runs the benchmark at least 100 times, whereas for exercising 
the GC we really need programs that run for several seconds.  I'm not 
sure how best to resolve this conflict.


Meanwhile, I've been collecting pointers to interesting programs that 
cross my radar, in anticipation of waking up with an unexpectedly free 
week in which to pull together a benchmark suite... clearly 
overoptimistic!  But I'll happily pass these pointers on to anyone with 
the inclination to do it.


Cheers,
Simon



Or if any of you out there have (recent) apps with inputs that are
open source ... let us know.

-- Andy


[1] COLE: Compiler Optimization Level Exploration, Kenneth Hoste and
Lieven Eeckhout, CGO 2008 [2] Automated Just-In-Time Compiler Tuning,
Kenneth Hoste, Andy Georges and Lieven Eeckhout, CGO 2010 [3]
Statistically Rigorous Java Performance Evaluation, Andy Georges,
Dries Buytaert and Lieven Eeckhout, OOPSLA 2007 [4] Java Performance
Evaluation through Rigorous Replay Compilation, Andy Georges, Lieven
Eeckhout and Dries Buytaert, OOPSLA 2008 [5] How Java Programs
Interact with Virtual Machines at the Microarchitectural Level,
Lieven Eeckhout, Andy Georges, Koen De Bosschere, OOPSLA 2003




Réf. : Re: [Haskell-cafe] GHCi and State

2010-06-25 Thread corentin . dupont
Hello,
thank you for your answer.

Brandon,
Indeed I think that I should write my own interpreter for a first version
of the game. It would be very instructive.

But then I'd like the players to be able to use the full power of Haskell to
write their own rules during game play.
Neat functions like map, filter, etc. could be used by the players to write
rules.
Perhaps Hint is good for me.

 take a look at the lambdabot source for the pitfalls of passing
arbitrary user-provided code to GHCi (or GHC API), and how to avoid them.
(In particular, if you're using GHC to parse your rules, what stops the
user
code from mangling the GameState on you?)

The code passed is not that arbitrary: it must have type Rule.
This type would enforce certain constructions and actions...

Roman,
for GHCi, I will try an IORef. Too bad I already coded it using StateT
GameState IO () extensively throughout the code ;)
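
If the StateT code is worth keeping, a small adapter can bridge the two styles (runOnRef is an invented name, a sketch only): each GHCi command reads the IORef, runs the existing StateT action, and writes the new state back.

```haskell
import Data.IORef
import Control.Monad.Trans.State (StateT, runStateT, modify)

-- Run an existing `StateT s IO` action against an IORef, so code
-- written for StateT keeps working across separate GHCi commands,
-- e.g.:  runOnRef ref (modify addMyRule)
runOnRef :: IORef s -> StateT s IO a -> IO a
runOnRef ref act = do
  s <- readIORef ref
  (x, s') <- runStateT act s
  writeIORef ref s'
  return x
```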

Corentin



   




[Haskell-cafe] Re: proposal: HaBench, a Haskell Benchmark Suite

2010-06-25 Thread Andy Georges
Hi Simon et al,

On Jun 25, 2010, at 14:39, Simon Marlow wrote:

 On 25/06/2010 00:24, Andy Georges wrote:
 
 snip 
 Are there any inputs available that allow the real part of the suite
 to run for a sufficiently long time? We're going to use criterion in
 any case given our own expertise with rigorous benchmarking [3,4],
 but since we've made a case in the past against short running apps on
 managed runtime systems [5], we'd love to have stuff that runs at
 least in the order of seconds, while doing useful things. All
 pointers are much appreciated.
 
 The short answer is no, although some of the benchmarks have tunable input 
 sizes (mainly the spectral ones) and you can 'make mode=slow' to run those 
 with larger inputs.
 
 More generally, the nofib suite really needs an overhaul or replacement.  
 Unfortunately it's a tiresome job and nobody really wants to do it. There 
 have been various abortive efforts, including nobench and HaBench.  Meanwhile 
 we in the GHC camp continue to use nofib, mainly because we have some tool 
 infrastructure set up to digest the results (nofib-analyse).  Unfortunately 
 nofib has steadily degraded in usefulness over time due to both faster 
 processors and improvements in GHC, such that most of the programs now run 
 for less than 0.1s and are ignored by the tools when calculating averages 
 over the suite.

Right. I have the distinct feeling this is a major gap in the Haskell world. 
SPEC evolved over time to include larger benchmarks that still exercise the 
various parts of the hardware, so that a benchmark does not suddenly achieve a 
large improvement on a new architecture/implementation merely because e.g. a 
larger cache lets the working set stay in cache for the entire execution. 
The Haskell community has nothing that remotely resembles a decent suite. You 
could do experiments and show that over 10K iterations the average execution 
time per iteration goes from 500ms to 450ms, but what does this really mean? 

 We have a need not just for plain Haskell benchmarks, but benchmarks that test
 
 - GHC extensions, so we can catch regressions
 - parallelism (see nofib/parallel)
 - concurrency (see nofib/smp)
 - the garbage collector (see nofib/gc)
 
 I tend to like quantity over quality: it's very common to get just one 
 benchmark in the whole suite that shows a regression or exercises a 
 particular corner of the compiler or runtime.  We should only keep benchmarks 
 that have a tunable input size, however.

I would suggest that the first category might be made up of microbenchmarks, as 
I do not think it is really needed for performance per se. However, the other 
categories really need long-running benchmarks that use (preferably) heaps of 
RAM, even when they are well tuned.

 Criterion works best on programs that run for short periods of time, because 
 it runs the benchmark at least 100 times, whereas for exercising the GC we 
 really need programs that run for several seconds.  I'm not sure how best to 
 resolve this conflict.

I'm not sure about this. Given the amount of non-determinism in modern CPUs, 
and the fact that computer systems seem to behave chaotically [1], I definitely 
see the need to employ Criterion for longer-running applications as well. It 
might not need 100 executions, or multiple iterations per execution 
(incidentally, can those iterations be said to be independent?), but somewhere 
around 20-30 seems to be a minimum. 

 
 Meanwhile, I've been collecting pointers to interesting programs that cross 
 my radar, in anticipation of waking up with an unexpectedly free week in 
 which to pull together a benchmark suite... clearly overoptimistic!  But I'll 
 happily pass these pointers on to anyone with the inclination to do it.


I'm definitely interested. If I want to make a strong case for my current 
research, I really need benchmarks that can be used. Additionally, coming up 
with a good suite and characterising it can easily result in a decent paper 
that is certain to be cited numerous times. I think it would have to be a 
group/community effort though. I've looked through the apps on the Haskell wiki 
pages, but there's not much usable there, imho. I'd like to illustrate this 
with the DaCapo benchmark suite [2,3] example. It took a while, but now 
everybody in the Java camp is (or should be) using these benchmarks. Saying 
that we simply do not want to do this is not a position that can plausibly be 
maintained. 


-- Andy


[1]  Computer systems are dynamical systems, Todd Mytkowicz, Amer Diwan, and 
Elizabeth Bradley, Chaos 19, 033124 (2009); doi:10.1063/1.3187791 (14 pages).
[2] The DaCapo benchmarks: java benchmarking development and analysis, Stephen 
Blackburn et al, OOPSLA 2006
[3] Wake up and smell the coffee: evaluation methodology for the 21st century, 
Stephen Blackburn et al, CACM 2008


[Haskell-cafe] Re: proposal: HaBench, a Haskell Benchmark Suite

2010-06-25 Thread Simon Marlow

On 25/06/2010 14:01, Andy Georges wrote:



Right. I have the distinct feeling this is a major lack in the
Haskell world. SPEC evolved over time to include larger benchmarks
that still excercise the various parts of the hardware, such that the
benchmarks does not achieve suddenly a large improvement on a new
architecture/implementation due to e.g. a larger cache and the
working sets remain in the cache for the entire execution. The
Haskell community has nothing that remotely resembles a decent suite.
You could do experiments and show that over 10K iterations, the
average execution time per iteration goes from 500ms to 450ms, but
what does this really mean?


We have a need not just for plain Haskell benchmarks, but
benchmarks that test

- GHC extensions, so we can catch regressions - parallelism (see
nofib/parallel) - concurrency (see nofib/smp) - the garbage
collector (see nofib/gc)

I tend to like quantity over quality: it's very common to get just
one benchmark in the whole suite that shows a regression or
exercises a particular corner of the compiler or runtime.  We
should only keep benchmarks that have a tunable input size,
however.


I would suggest that the first category might be made up of
microbenchmarks, as I do not think it really is needed for
performance per se. However, the other categories really need
long-running benchmarks, that use (preferable) heaps of RAM, even
when they're well tuned.


The categories you mention aren't necessarily distinct: we have several 
microbenchmarks that run for a long time and use a lot of heap.  For 
testing the GC, as with other parts of the system, we need both 
microbenchmarks and larger programs.  Different people want different 
things from a benchmark suite: if you're demonstrating the efficacy of 
an optimisation or a language implementation, then you want just the 
real benchmarks, whereas if you're a compiler developer you probably 
want the microbenchmarks too, because investigating their performance 
tends to be more tractable, and the hope is that if you optimise all the 
microbenchmarks then the real programs will take care of themselves (it 
doesn't always work like that, but it's a good way to start).


So I still very much like the approach taken by the venerable nofib 
suite where it includes not only the real programs, but also the 
microbenchmarks and the small programs; you don't have to use these in 
published results, but they're invaluable to us compiler developers, and 
having a shared framework for all the benchmarks makes things a lot easier.


If we made it *really easy* for people to submit their own programs 
(e.g. using 'darcs send') then we might get a lot of contributions, from 
which we could cherry-pick for the real benchmark suite, while 
keeping most/all of the submissions for the full suite.  Similarly, we 
should make it really easy for people to run the benchmark suite on 
their own machines and compilers - make the tools cabal-installable, 
with easy ways to generate results.



I'm definitely interested. If I want to make a strong case for my
current research, I really need benchmarks that can be used.
Additionally, coming up with a good suite, characterising it, can
easily result is a decent paper, that is certain to be cited numerous
times. I think it would have to be a group/community effort though.
I've looked through the apps on the Haskell wiki pages, but there's
not much usable there, imho. I'd like to illustrate this by the
dacapo benchmark suite [2,3] example. It took a while, but now
everybody in the Java camp is (or should be) using these benchmarks.
Saying that we just do not want to do this, is simply not plausible
to maintain.


Oh, don't get me wrong - we absolutely do want to do this, it's just 
difficult to get motivated to actually do it.  It's great that you're 
interested, I'll help any way that I can, and I'll start by digging up 
some suggestions for benchmarks.


Cheers,
Simon





-- Andy


[1]  Computer systems are dynamical systems, Todd Mytkowicz, Amer
Diwan, and Elizabeth Bradley, Chaos 19, 033124 (2009);
doi:10.1063/1.3187791 (14 pages). [2] The DaCapo benchmarks: java
benchmarking development and analysis, Stephen Blackburn et al,
OOPSLA 2006 [3] Wake up and smell the coffee: evaluation methodology
for the 21st century, Stephen Blackburn et al, CACM 2008



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] RE: proposal: HaBench, a Haskell Benchmark Suite

2010-06-25 Thread Simon Peyton-Jones
I'm delighted that you are interested in this benchmarking stuff.  Much needed. 
 Thank you!

| So I still very much like the approach taken by the venerable nofib
| suite where it includes not only the real programs, but also the
| microbenchmarks and the small programs; you don't have to use these in
| published results, but they're invaluable to us compiler developers, and
| having a shared framework for all the benchmarks makes things a lot easier.

Yes yes. It's *essential* to retain the micro-benchmarks. They often show up in 
high relief a performance regression that would be hidden or much less 
prominent in a big program.

The three-way split imaginary/spectral/real has served us well.  Let's keep it!

Simon


Re: [Haskell-cafe] CnC Haskell

2010-06-25 Thread David Peixotto
There is a reference for the CnC grammar in the repository for the .NET 
implementation. 

http://github.com/dmpots/CnC.NET/blob/master/CnC.NET/CnC.NET/cnc.grammar

The parser specification for fsyacc (the F# YACC implementation) is here:

http://github.com/dmpots/CnC.NET/blob/master/CnC.NET/CnC.NET/Parser.fsy

The textual representation is still in flux a bit, but this grammar should be 
enough of a guide for implementing a parser in Haskell. The grammar is left 
recursive, so using a parser generator like Happy would be a good choice.

The textual representation will actually be a bit different depending on the 
underlying language since the types of items stored in a collection is part of 
the description. For example in C, an item collection that stores an array of 
ints would be declared like:

[int* A];

but in Haskell we would want to write something like

[Array Int Int A];

I think dealing with type declarations in the textual representation 
would be the main difference in implementing the parser in Haskell. Once the 
textual representation has been parsed to an AST, it should be possible to 
generate the Haskell code that builds the graph using the haskell-cnc package.
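To make the parsing target concrete, here is a minimal sketch of what the parsed form might look like on the Haskell side. The AST and names below are hypothetical illustrations of the idea that the element type is part of the declaration; they are not part of haskell-cnc or the CnC grammar.

```haskell
-- Hypothetical AST for CnC item-collection declarations (names made up).
data CncType = TyCon String [CncType]   -- e.g. Array Int Int
  deriving (Eq, Show)

data Decl = ItemColl CncType String     -- [ <type> <name> ];
  deriving (Eq, Show)

-- Render a declaration back to the bracketed textual form.
renderDecl :: Decl -> String
renderDecl (ItemColl ty name) = "[" ++ renderTy ty ++ " " ++ name ++ "];"

renderTy :: CncType -> String
renderTy (TyCon c args) = unwords (c : map renderTy args)
```

With this, renderDecl (ItemColl (TyCon "Array" [TyCon "Int" [], TyCon "Int" []]) "A") reproduces the "[Array Int Int A];" form quoted above.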

-David

On Jun 23, 2010, at 3:56 PM, Vasili I. Galchin wrote:

 
 
 On Wed, Jun 23, 2010 at 3:47 PM, Don Stewart d...@galois.com wrote:
 vigalchin:
  Hello,
 
   I have been reading work done at Rice University:  http://
  habanero.rice.edu/cnc. Some work has been done by http://www.cs.rice.edu/
  ~dmp4866/ on CnC for .Net. One component that David wrote a CnC translator 
  that
  translates CnC textual form to the underlying language, e.g. F#. Is anybody
  working on a CnC textual form translator for Haskell so a Haskell user of 
  CnC
  Haskell can write in a higher level??
 
 Ah, so by a translator from high level CnC form to this:
 

 http://hackage.haskell.org/packages/archive/haskell-cnc/latest/doc/hml/Intel-Cnc.html
 
^^ exactly what I mean
  
 ? Do you have a reference for the CnC textual form?
  ^^ if you mean something like a context-free grammatical 
 definition of the CnC textual form ... the answer is I haven't seen such a 
 reference.
 
 V.
 
 
 
 -- Don
 



Re: [Haskell-cafe] proposal: HaBench, a Haskell Benchmark Suite

2010-06-25 Thread Jinjing Wang
poor man's benchmark :)

http://github.com/nfjinjing/bench-euler

multi-core aware; use "bench-euler +RTS -N2", where 2 means 2 cores, and
watch your CPU fry :)

On Fri, Jun 25, 2010 at 7:24 AM, Andy Georges
andy.geor...@elis.ugent.be wrote:
 Hi Simon et al,


 I've picked up the HaBench/nofib/nobench issue again, needing a decent set of 
 real applications to do some exploring of what people these days call 
 split-compilation. We have a framework that was able to explore GCC 
 optimisations [1] -- the downside there was the dependency of these 
 optimisations on each other, requiring them to be done in certain order -- 
 for a multi-objective search space, and extended this to exploring a JIT 
 compiler [2] for Java in our case -- which posed its own problems. Going one 
 step further, we'd like to explore the tradeoffs that can be made when 
 compiling on different levels: source to bytecode (in some sense) and 
 bytecode to native. Given that LLVM is quickly becoming a state-of-the-art 
 framework and with the recent GHC support, we figured that Haskell would be 
 an excellent vehicle to conduct our exploration and research (and the fact 
 that some people at our lab have a soft spot for Haskell helps too). Which 
 brings me back to benchmarks.

 Are there any inputs available that allow the real part of the suite to run 
 for a sufficiently long time? We're going to use criterion in any case given 
 our own expertise with rigorous benchmarking [3,4], but since we've made a 
 case in the past against short running apps on managed runtime systems [5], 
 we'd love to have stuff that runs at least in the order of seconds, while 
 doing useful things. All pointers are much appreciated.

 Or if any of you out there have (recent) apps with inputs that are open 
 source ... let us know.

 -- Andy


 [1] COLE: Compiler Optimization Level Exploration, Kenneth Hoste and Lieven 
 Eeckhout, CGO 2008
 [2] Automated Just-In-Time Compiler Tuning, Kenneth Hoste, Andy Georges and 
 Lieven Eeckhout, CGO 2010
 [3] Statistically Rigorous Java Performance Evaluation, Andy Georges, Dries 
 Buytaert and Lieven Eeckhout, OOPSLA 2007
 [4] Java Performance Evaluation through Rigorous Replay Compilation, Andy 
 Georges, Lieven Eeckhout and Dries Buytaert, OOPSLA 2008
 [5] How Java Programs Interact with Virtual Machines at the 
 Microarchitectural Level, Lieven Eeckhout, Andy Georges, Koen De Bosschere, 
 OOPSLA 2003






-- 
jinjing


Re: [Haskell-cafe] Huffman Codes in Haskell

2010-06-25 Thread Thomas Horstmeyer



How would you implement bfnum?  (If you've already read the paper,
what was your first answer?)

Actually, my first thought was Just traverse the tree twice. Then it 
dawned on me, that the tree could be of infinite size and I settled for 
the following solution:


bfsn :: Tree a -> Tree Int
bfsn E = E
bfsn t = f [t] [] 0
  where
    f :: [Tree a] -> [Tree a] -> Int -> Tree Int
    f (T _ E E : xs) ys n = T n E E
    f (T _ t1 E : xs) ys n =
      T n (f (t1 : children xs) (children ys) (n + length xs + length ys + 1)) E
    f (T _ E t2 : xs) ys n =
      T n E (f (t2 : children xs) (children ys) (n + length xs + length ys + 1))
    f (T _ t1 t2 : xs) ys n = T n t1' t2'
      where
        t1' = f (t1 : t2 : children xs) (children ys) m
        t2' = f (t2 : children xs) (children ys ++ children [t1]) (m + 1)
        m = length xs + length ys + n + 1

children :: [Tree a] -> [Tree a]
children [] = []
children (E : xs) = children xs
children (T _ E E : xs) = children xs
children (T _ E t2 : xs) = t2 : children xs
children (T _ t1 E : xs) = t1 : children xs
children (T _ t1 t2 : xs) = t1 : t2 : children xs

One could perhaps rewrite it into something more elegant (less cases for 
the f-function, write children as a map), but you wanted the first answer.

I am going to have a look at the paper now...
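For comparison, here is a level-at-a-time sketch of the same breadth-first numbering. This is my own reconstruction, assuming the thread's Tree type with constructors E and T; it is not Okasaki's queue-based solution from the paper, and it targets finite trees.

```haskell
data Tree a = E | T a (Tree a) (Tree a)
  deriving (Eq, Show)

-- Number the nodes breadth-first, one level at a time: count the
-- nodes on the current level, number them left to right, and pass
-- the next free number down to the recursive call on the children.
bfnum :: Tree a -> Tree Int
bfnum t = head (go [t] 1)
  where
    go :: [Tree a] -> Int -> [Tree Int]
    go []    _ = []
    go level n = label level n numberedKids
      where
        count        = length [ () | T {} <- level ]
        kids         = concat [ [l, r] | T _ l r <- level ]
        numberedKids = go kids (n + count)

        label []             _ _  = []
        label (E : ts)       m ks = E : label ts m ks
        label (T _ _ _ : ts) m ks = case ks of
          (l : r : ks') -> T m l r : label ts (m + 1) ks'
          _             -> error "bfnum: impossible"
```

On T 'a' (T 'b' E E) (T 'c' E (T 'd' E E)) this yields T 1 (T 2 E E) (T 3 E (T 4 E E)).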

Thomas



Re: [Haskell-cafe] Read instance for GATD

2010-06-25 Thread Stephen Tetley
Hi Corentin

I think Oleg Kiselyov is parsing / reading into a GADT here:

http://okmij.org/ftp/tagless-final/tagless-typed.html

See the section - Metatypechecking: Staged Typed Compilation into GADT
using typeclasses

http://okmij.org/ftp/Haskell/staged/TypeCheck.hs
http://okmij.org/ftp/Haskell/staged/TermLift.hs

His solution requires quite a bit of machinery - Template Haskell, a
TypeCheck type class...

Best wishes

Stephen


Re: [Haskell-cafe] Core packages and locale support

2010-06-25 Thread Jason Dagit
On Thu, Jun 24, 2010 at 11:42 PM, Roman Cheplyaka r...@ro-che.info wrote:

 * Jason Dagit da...@codersbase.com [2010-06-24 20:52:03-0700]
  On Sat, Jun 19, 2010 at 1:06 AM, Roman Cheplyaka r...@ro-che.info
 wrote:
 
   While ghc 6.12 finally has proper locale support, core packages (such
 as
   unix) still use withCString and therefore work incorrectly when
 argument
   (e.g. file path) is not ASCII.
  
 
  Pardon me if I'm misunderstanding withCString, but my understanding of
 unix
  paths is that they are to be treated as strings of bytes.  That is,
 unlike
  windows, they do not have an encoding predefined.  Furthermore, you could
  have two filepaths in the same directory with different encodings due to
  this.
 
  In this case, what would be the correct way of handling the paths?
   Converting to a Haskell String would require knowing the encoding,
 right?
   My reasoning is that Haskell Char type is meant to correspond to code
  points so putting them into a string means you have to know their code
 point
  which is different from their (multi-)byte value right?
 
  Perhaps I have some details wrong?  If so, please clarify.

 Jason,

 you got everything right here. So, as you said, there is a mismatch
 between representation in Haskell (list of code points) and
 representation in the operating system (list of bytes), so we need to
 know the encoding. Encoding is supplied by the user via locale
 (https://secure.wikimedia.org/wikipedia/en/wiki/Locale), particularly
 LC_CTYPE variable.

 The problem with encodings is not new -- it was already solved e.g. for
 input/output.


This is the part where I don't understand the problem well.  I thought that
with IO the program assumes the locale of the environment but that with
filepaths you don't know what locale (more specifically which encoding) they
were created with.  So if you try to treat them as having the locale of the
current environment you run the risk of misunderstanding their encoding.

Jason


Re: Réf. : Re: [Haskell-cafe] GHCi and State

2010-06-25 Thread Tillmann Rendel

Hi Corentin,

corentin.dup...@ext.mpsa.com schrieb:

for GHCi, i will try an IORef. Too bad i allready coded it using StateT
GameState IO () extensively through the code ;)


That shouldn't be a problem, you could switch back and forth in 
submitRule, approximately like this:


  startGame :: IO (Rule -> IO ())
  startGame = do
    gameState <- newIORef initialState
    return (\rule -> wrapStateT (submitRule rule) gameState)

  wrapStateT :: StateT s IO a -> IORef s -> IO a
  wrapStateT action ref = do
    state <- readIORef ref
    (answer, state') <- runStateT action state
    writeIORef ref state'
    return answer

Now you can start a game with

  mySubmitRule <- startGame

and use mySubmitRule at the ghci prompt afterwards.

  Tillmann


[Haskell-cafe] ANN: parameterized-data library v0.1.5

2010-06-25 Thread Seyed Hosein Attarzadeh Niaki
Dear Haskell developers,

A new version of the parameterized-data library providing fixed-sized vectors 
(based on the type-level library) is pushed into the repository and uploaded to 
the Hackage database.

The new version relaxes the Data constraint to Typeable on phantom data 
types used to specify the size of FSVecs. This additional constraint was 
introduced due to an enhancement to the automated instance derivation mechanism 
in GHC 6.12.

Bests,
Hosein Attarzadeh


[Haskell-cafe] ANN: ForSyDe DSL v3.1.1

2010-06-25 Thread Seyed Hosein Attarzadeh Niaki
Dear All,

A new version of the ForSyDe DSL is uploaded to the Hackage database.

The ForSyDe (Formal System Design) methodology has been developed with the 
objective to move system design to a higher level of abstraction and to bridge 
the abstraction gap by transformational design refinement. This library 
provides ForSyDe's implementation as a Haskell-embedded Domain Specific 
Language (DSL). For more information, please see ForSyDe's website: 
http://www.ict.kth.se/forsyde/.

This version includes a new Model of Computation (Dataflow) for modeling 
systems in shallow-embedded ForSyDe (intended for simulation). ForSyDe is now 
compatible with GHC >= 6.12.2.

BR/
Hosein Attarzadeh


Re: [Haskell-cafe] Read instance for GATD

2010-06-25 Thread Edward Kmett
It turns out that defining Read is somewhat tricky to do for a GADT.

Gershom Bazerman has some very nice slides on how to survive the process by
manual typechecking (much in the spirit of Oleg's meta-typechecking code
referenced by Stephen's follow up below)

He presented them at hac-phi this time around.

I will check with him to see if I can get permission to host them somewhere
and post a link to them here.

-Edward Kmett

On Fri, Jun 25, 2010 at 5:04 AM, corentin.dup...@ext.mpsa.com wrote:


 Hello Haskellers,

 I'm having trouble writing a Read instance for my GADT.
 Argh, this GADT!! It causes me more problems than it solves ;)
 Especially with no automatic deriving, it adds a lot of burden to my code.

 data Obs a where
 ProposedBy :: Obs Int   -- The player that proposed the tested rule
 Turn       :: Obs Turn  -- The current turn
 Official   :: Obs Bool  -- whether the tested rule is official
 Equ        :: (Eq a, Show a, Typeable a) => Obs a -> Obs a -> Obs Bool
 Plus       :: (Num a) => Obs a -> Obs a -> Obs a
 Time       :: (Num a) => Obs a -> Obs a -> Obs a
 Minus      :: (Num a) => Obs a -> Obs a -> Obs a
 And        :: Obs Bool -> Obs Bool -> Obs Bool
 Or         :: Obs Bool -> Obs Bool -> Obs Bool
 Not        :: Obs Bool -> Obs Bool
 Konst      :: a -> Obs a


  instance Read a => Read (Obs a) where
  readPrec = (prec 10 $ do
 Ident "ProposedBy" <- lexP
 return (ProposedBy))
  +++
   (prec 10 $ do
 Ident "Official" <- lexP
 return (Official))
   (etc...)

 Observable.lhs:120:8:
Couldn't match expected type `Int' against inferred type `Bool'
  Expected type: ReadPrec (Obs Int)
  Inferred type: ReadPrec (Obs Bool)


 Indeed, ProposedBy does not have the same type as Official.
 Mmh, how to make it all mix together gently?


 Best,
 Corentin




Re: [Haskell-cafe] Fwd: signficant improvements to the containers package

2010-06-25 Thread David Menendez
On Fri, Jun 25, 2010 at 12:58 AM, Ivan Miljenovic
ivan.miljeno...@gmail.com wrote:
 On 25 June 2010 14:41, David Menendez d...@zednenem.com wrote:
 On Thu, Jun 24, 2010 at 3:08 AM, Ivan Miljenovic
 ivan.miljeno...@gmail.com wrote:
 As an aside, Alex Mason and I are discussing the possibility of taking
 advantage of AusHack *shameless plug* to write some kind of classes
 for the different types of containers with a hierarchy.  I know about
 ListLike, but there doesn't seem to be any applicable classes for
 generic containers (i.e. the abstract API of a Set; something like
 ListLike would then be an extension on it) and for lookup data
 structures (Map k a, [(k, a)], etc.).

 Be sure to look into Okasaki's work on Edison. It has classes for
 sequences (list-like structures) and collections (sets, heaps) and
 associations (maps, priority queues) and a paper discussing the design
 decisions.

 Yeah, we will be.

 The reason this came up: Thomas Berekneyi wanted to use such classes
 for the rewrite of FGL, and when he discussed it on #haskell people
 generally indicated that edison was the best out there but was a bit
 long in the tooth and something like it should be re-written (though
 no-one seemed to volunteer... hmmm... :p).

Edison could use some re-thinking; the state of the art in Haskell has
advanced, and there are new classes like Data.Foldable and
Data.Traversable to consider.

In my mind, the big question is whether Sequence and Assoc should be
constructor classes or use type families/fundeps. I lean towards the
former, but it means that things like ByteString can't be instances of
Sequence.
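To illustrate the trade-off, here is a sketch with made-up class names (not Edison's actual API): a constructor class abstracts over the container while staying polymorphic in the element, whereas a type-family (or fundep) formulation fixes the element type per container, which is what would let a monomorphic type like ByteString join.

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Constructor-class formulation: every instance must be polymorphic
-- in the element type, which rules out ByteString.
class SequenceC f where
  emptyC :: f a
  consC  :: a -> f a -> f a

-- Type-family formulation: the element type is chosen per container,
-- so a monomorphic sequence type could be an instance too.
class SequenceF s where
  type Elem s
  emptyF :: s
  consF  :: Elem s -> s -> s

instance SequenceC [] where
  emptyC = []
  consC  = (:)

instance SequenceF [a] where
  type Elem [a] = a
  emptyF = []
  consF  = (:)
```

A ByteString instance would only be possible in the second style, at the cost of losing Functor/Foldable-like element polymorphism.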

 For example: it's a little weird that edison re-exports Data.Set and
 uses it for the instance with a type alias (same with map, Seq, etc.)
 rather than just using Data.Set itself.

I believe that's so the implementations export a common interface. If
you're in a situation where you want to use a specific implementation,
Edison is designed so that you can import just the implementation
module and avoid the overhead of the class system while still making
it easy to switch implementations in the future.

  I also find the
 structuralInvariant and instanceName fields to be a little odd,

I believe structuralInvariant is there for testing.

I'm not sure what instanceName is for. It's possible Edison predates
Data.Typeable, in which case instanceName might be useful for similar
purposes.

-- 
Dave Menendez d...@zednenem.com
http://www.eyrie.org/~zednenem/


[Haskell-cafe] Why my program runs out of memory?

2010-06-25 Thread Vladimir Ch.
I'm trying to solve this problem:
http://projecteuler.net/index.php?section=problems&id=44, and my
program runs out of memory. I was wondering where I have a space leak,
I thought that this program doesn't accumulate any data structures
(i.e. should use constant memory):

CODE BEGINS

-- returns list of pairs (i, j) of indices, for which differences
-- (gen j - gen i) increase, provided that gen has the following property:
-- gen (i+1) - gen i > gen i - gen (i - 1)
incDiffPairs gen = iterate next (0, 1)
   where next (i, j) = if (i == 0) || gen (j+1) - gen j < gen j - gen (i - 1)
                         then (j, j + 1)
                         else (i - 1, j)

-- returns the ith pentagonal number
pentagonal i = let n = i + 1
               in n * (3 * n - 1) `div` 2

-- tests if a number is pentagonal
isPentagonal x = let d = 24 * x + 1
                 in isSquare d && (1 + sqrti d) `mod` 6 == 0

result44 = head [(pentagonal j - pentagonal i, (i, j)) | (i, j) <- incDiffPairs pentagonal,
                 isPentagonal (pentagonal j - pentagonal i),
                 isPentagonal (pentagonal j + pentagonal i)]

CODE ENDS


Re: [Haskell-cafe] Why my program runs out of memory?

2010-06-25 Thread Daniel Fischer
On Friday 25 June 2010 20:51:44, Vladimir Ch. wrote:
 I'm trying to solve this problem:
 http://projecteuler.net/index.php?section=problems&id=44, and my
 program runs out of memory.

Compile with optimisations. With -O2, after adding fairly naive isSquare 
and sqrti functions, it seems to run in constant memory (but very slowly, 
you need a better algorithm, really 8-) )
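For reference, one naive pair of definitions (my guess at what was added, not Daniel's actual code; the Double-based square root is fine for inputs of this size but loses precision for very large numbers):

```haskell
-- Naive integer square root via Double; adequate up to roughly 2^52.
sqrti :: Integer -> Integer
sqrti = floor . (sqrt :: Double -> Double) . fromIntegral

isSquare :: Integer -> Bool
isSquare n = n >= 0 && sqrti n * sqrti n == n
```

For huge inputs one would want an exact Integer square root (e.g. Newton's method) instead.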

 I was wondering where I have a space leak,

Profiling would tell you where the memory goes.

 I thought that this program doesn't accumulate any data structures
 (i.e. should use constant memory):

 CODE BEGINS

 -- returns list of pairs (i, j) of indices, for which differences
 -- (gen j - gen i) increase, provided that gen has the following property:
 -- gen (i+1) - gen i > gen i - gen (i - 1)
 incDiffPairs gen = iterate next (0, 1)
    where next (i, j) = if (i == 0) || gen (j+1) - gen j < gen j - gen (i - 1)
                          then (j, j + 1)
                          else (i - 1, j)

 -- returns ith pentagonal number
 pentagonal i = let n = i + 1
in n * (3 * n - 1) `div` 2

 -- tests if a number is pentagonal
 isPentagonal x = let d = 24 * x + 1
                  in isSquare d && (1 + sqrti d) `mod` 6 == 0

 result44 = head [(pentagonal j - pentagonal i, (i, j)) | (i, j) <- incDiffPairs pentagonal,
                  isPentagonal (pentagonal j - pentagonal i),
                  isPentagonal (pentagonal j + pentagonal i)]

 CODE ENDS


Re: [Haskell-cafe] Why my program runs out of memory?

2010-06-25 Thread Daniel Fischer
On Friday 25 June 2010 20:51:44, Vladimir Ch. wrote:
 I'm trying to solve this problem:
 http://projecteuler.net/index.php?section=problems&id=44, and my
 program runs out of memory. I was wondering where I have a space leak,
 I thought that this program doesn't accumulate any data structures
 (i.e. should use constant memory):

 CODE BEGINS

 -- returns list of pairs (i, j) of indices, for which differences
 -- (gen j - gen i) increase, provided that gen has the following property:
 -- gen (i+1) - gen i > gen i - gen (i - 1)

Oh, and: this list is not complete, e.g. (0, j) doesn't appear in the list 
for j > 1, so the algorithm isn't correct.

 incDiffPairs gen = iterate next (0, 1)
    where next (i, j) = if (i == 0) || gen (j+1) - gen j < gen j - gen (i - 1)
                          then (j, j + 1)
                          else (i - 1, j)

 -- returns ith pentagonal number
 pentagonal i = let n = i + 1
in n * (3 * n - 1) `div` 2

 -- tests if a number is pentagonal
 isPentagonal x = let d = 24 * x + 1
                  in isSquare d && (1 + sqrti d) `mod` 6 == 0

 result44 = head [(pentagonal j - pentagonal i, (i, j)) | (i, j) <- incDiffPairs pentagonal,
                  isPentagonal (pentagonal j - pentagonal i),
                  isPentagonal (pentagonal j + pentagonal i)]

 CODE ENDS



Re: [Haskell-cafe] Manual Type-Checking to provide Read instances for GADTs. (was Re: [Haskell-cafe] Read instance for GATD)

2010-06-25 Thread Edward Kmett
I've obtained permission to repost Gershom's slides on how to deserialize
GADTs by typechecking them yourself, which are actually a literate haskell
source file, that he was rendering to slides. Consequently, I've just pasted
the content below as a literate email.

-Edward Kmett

-
Deserializing strongly typed values

(four easy pieces about typechecking)

Gershom Bazerman

-
prior art:
Oleg (of course)
http://okmij.org/ftp/tagless-final/course/course.html

...but also
Stephanie Weirich
http://www.comlab.ox.ac.uk/projects/gip/school/tc.hs

=
Ahem...
\begin{code}
{-# LANGUAGE DeriveDataTypeable,
 ExistentialQuantification,
 FlexibleContexts,
 FlexibleInstances,
 FunctionalDependencies,
 GADTs,
 RankNTypes,
 ScopedTypeVariables
 #-}
\end{code}
=
ahem.
\begin{code}
import Data.Typeable
import Data.Maybe
import Control.Monad.Error
import Control.Applicative
import qualified Data.Map as M
import Unsafe.Coerce
\end{code}
=
A simple ADT.

\begin{code}
data SimpleExpr = SOpBi String SimpleExpr SimpleExpr
| SOpUn String SimpleExpr
| SDbl Double
| SBool Bool deriving (Read, Show, Typeable)

\end{code}
Yawn.
=
An awesome GADT!

\begin{code}
data Expr a where
    EDbl  :: Double -> Expr Double
    EBool :: Bool -> Expr Bool
    EBoolOpBi :: BoolOpBi -> Expr Bool -> Expr Bool -> Expr Bool
    ENumOpBi  :: NumOpBi -> Expr Double -> Expr Double -> Expr Double
    ENumOpUn  :: NumOpUn -> Expr Double -> Expr Double
 deriving Typeable

data NumOpBi = Add | Sub deriving (Eq, Show)
data NumOpUn = Log | Exp deriving (Eq, Show)
data BoolOpBi = And | Or deriving (Eq, Show)
\end{code}

The GADT is well typed. It cannot go wrong.
-
It also cannot derive show.
=
But we can write show...

\begin{code}
showIt :: Expr a -> String
showIt (EDbl d) = show d
showIt (EBool b) = show b
showIt (EBoolOpBi op x y) = "(" ++ show op
                            ++ " " ++ showIt x
                            ++ " " ++ showIt y ++ ")"
showIt (ENumOpBi op x y)  = "(" ++ show op
                            ++ " " ++ showIt x
                            ++ " " ++ showIt y ++ ")"
showIt (ENumOpUn op x) = show op ++ "(" ++ showIt x ++ ")"
\end{code}
=
And eval is *much nicer*.
It cannot go wrong -- no runtime typechecks.

\begin{code}
evalIt :: Expr a -> a
evalIt (EDbl x) = x
evalIt (EBool x) = x
evalIt (EBoolOpBi op expr1 expr2)
   | op == And = evalIt expr1 && evalIt expr2
   | op == Or  = evalIt expr1 || evalIt expr2

evalIt (ENumOpBi op expr1 expr2)
   | op == Add = evalIt expr1 + evalIt expr2
   | op == Sub = evalIt expr1 - evalIt expr2
=
But how do we write read!?

read "EBool False" => Expr Bool
read "EDbl 12" => Expr Double

The type being read depends on the content of the string.

Even worse, we want to read not from a string that looks obvious
to Haskell (i.e. a standard showlike instance) but from
something that looks pretty to the user -- we want to *parse*.

So we parse into our simple ADT.

Then we turn our simple ADT into our nice GADT.
-
But how?

How do we go from untyped... to typed?

[And in general -- not just into an arbitrary GADT,
but an arbitrary inhabitant of a typeclass.]

[i.e. tagless final, etc]

=
Take 1:
Even if we do not know what type we are creating,
we eventually will do something with it.

So we parameterize our typechecking function over
an arbitrary continuation.

\begin{code}
mkExpr :: (forall a. (Show a, Typeable a) => Expr a -> r) -> SimpleExpr -> r
mkExpr k expr = case expr of
   SDbl d  -> k $ EDbl d
   SBool b -> k $ EBool b
   SOpUn op expr1 -> case op of
  "log" -> k $ mkExpr' (ENumOpUn Log) expr1
  "exp" -> k $ mkExpr' (ENumOpUn Exp) expr1
  _ -> error "bad unary op"
   SOpBi op expr1 expr2 -> case op of
  "add" -> k $ mkExprBi (ENumOpBi Add) expr1 expr2
  "sub" -> k $ mkExprBi (ENumOpBi Sub) expr1 expr2
=
Where's the typechecking?

\begin{code}
mkExpr' k expr = mkExpr (appCast $ k) expr


mkExprBi k expr1 expr2 = mkExpr' (mkExpr' k expr1) expr2


appCast :: forall a b c r. (Typeable a, Typeable b) => (c a -> r) -> c b -> r
appCast f x = maybe err f $ gcast x
    where err = error $ "Type error. Expected: " ++ show (typeOf (undefined::a))
                ++ ", Inferred: " ++ show (typeOf (undefined::b))

... We let Haskell do all the work!
=
Hmmm... the continuation can be anything.
So let's just make it an existential constructor.

\begin{code}
data ExprBox = forall a. Typeable a => ExprBox (Expr a)

appExprBox :: (forall a. Expr a -> res) -> ExprBox -> res
appExprBox f (ExprBox x) = f x

tcCast :: forall a b c. (Typeable a, Typeable b) => Expr a -> Either String (Expr b)
tcCast x = maybe err Right $ gcast x
    where err = Left $ "Type error. Expected: " ++ show (typeOf (undefined::a))
                ++ ", Inferred: " ++ show 

Re: [Haskell-cafe] Core packages and locale support

2010-06-25 Thread Roman Cheplyaka
* Jason Dagit da...@codersbase.com [2010-06-25 10:09:21-0700]
 On Thu, Jun 24, 2010 at 11:42 PM, Roman Cheplyaka r...@ro-che.info wrote:
 
  * Jason Dagit da...@codersbase.com [2010-06-24 20:52:03-0700]
   On Sat, Jun 19, 2010 at 1:06 AM, Roman Cheplyaka r...@ro-che.info
  wrote:
  
While ghc 6.12 finally has proper locale support, core packages (such
  as
unix) still use withCString and therefore work incorrectly when
  argument
(e.g. file path) is not ASCII.
   
  
   Pardon me if I'm misunderstanding withCString, but my understanding of
  unix
   paths is that they are to be treated as strings of bytes.  That is,
  unlike
   windows, they do not have an encoding predefined.  Furthermore, you could
   have two filepaths in the same directory with different encodings due to
   this.
  
   In this case, what would be the correct way of handling the paths?
Converting to a Haskell String would require knowing the encoding,
  right?
My reasoning is that Haskell Char type is meant to correspond to code
   points so putting them into a string means you have to know their code
  point
   which is different from their (multi-)byte value right?
  
   Perhaps I have some details wrong?  If so, please clarify.
 
  Jason,
 
  you got everything right here. So, as you said, there is a mismatch
  between representation in Haskell (list of code points) and
  representation in the operating system (list of bytes), so we need to
  know the encoding. Encoding is supplied by the user via locale
  (https://secure.wikimedia.org/wikipedia/en/wiki/Locale), particularly
  LC_CTYPE variable.
 
  The problem with encodings is not new -- it was already solved e.g. for
  input/output.
 
 
 This is the part where I don't understand the problem well.  I thought that
 with IO the program assumes the locale of the environment but that with
 filepaths you don't know what locale (more specifically which encoding) they
 were created with.  So if you try to treat them as having the locale of the
 current environment you run the risk of misunderstanding their encoding.

Sure you do. But there is no other source of encoding information apart
from the current locale. So UNIX (currently) puts the responsibility on
the user.

It's hard to give convincing examples demonstrating this semantics,
because UNIX userspace is mostly written in C, where char is just a
byte, so most programs don't bother with encoding and decoding.

The difference between IO and filenames is vague -- what if you pipe ls(1)
into some program? Since ls does no recoding, encoding filenames
differently from the locale is a bad idea.

By the way, GTK (which internally uses UTF-8 for strings) treats this
problem differently -- it has special variable G_FILENAME_ENCODING and
also G_BROKEN_FILENAMES (which means that filenames are encoded as
locale says). I have no clue how their G_* variables are better than our
conventional LC_* variables though.
http://www.gtk.org/api/2.6/glib/glib-Character-Set-Conversion.html
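A small demonstration of why the choice of encoding matters, using GHC's encoding-aware C-string functions. GHC.Foreign ships with base in GHC 7 and later, so this is a present-day sketch rather than something the 6.12-era core packages could use:

```haskell
import GHC.Foreign (peekCString, withCString)
import GHC.IO.Encoding (latin1, utf8)

-- Encode the single code point U+00E9 ('é') as UTF-8 bytes, then
-- decode those same bytes as Latin-1: the classic mojibake, showing
-- that the bytes alone don't determine the filename's text.
misdecoded :: IO String
misdecoded = withCString utf8 "\xe9" (peekCString latin1)
```

Running misdecoded in GHCi yields "\195\169", i.e. the two characters "Ã©" instead of the one character that was written.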

-- 
Roman I. Cheplyaka :: http://ro-che.info/
Don't let school get in the way of your education. - Mark Twain


Re: [Haskell-cafe] Core packages and locale support

2010-06-25 Thread Brandon S Allbery KF8NH

On 6/25/10 17:05 , Roman Cheplyaka wrote:
 By the way, GTK (which internally uses UTF-8 for strings) treats this
 problem differently -- it has special variable G_FILENAME_ENCODING and
 also G_BROKEN_FILENAMES (which means that filenames are encoded as
 locale says). I have no clue how their G_* variables are better than our
 conventional LC_* variables though.
 http://www.gtk.org/api/2.6/glib/glib-Character-Set-Conversion.html

I would assume what they really mean by this is that the filename encoding
should be part of the file metadata and G_BROKEN_FILENAMES means it isn't.
G_FILENAME_ENCODING would then be the encoding used when creating new files.

-- 
brandon s. allbery [linux,solaris,freebsd,perl]  allb...@kf8nh.com
system administrator  [openafs,heimdal,too many hats]  allb...@ece.cmu.edu
electrical and computer engineering, carnegie mellon university  KF8NH


[Haskell-cafe] Type-Level Programming

2010-06-25 Thread Walt Rorie-Baety
I've noticed over the - okay, over the months - that some folks enjoy the
puzzle-like qualities of programming in the type system (poor Oleg, he's
become #haskell's answer to the Chuck Norris meme commonly encountered in
MMORPGs).

Anyway,... are there any languages out there whose term-level programming
resembles Haskell type-level programming, and if so, would a deliberate
effort to appeal to that resemblance be an advantage (leaving out for now
the hair-pulling effort that such a change would entail)?

Or, better yet, is there an Interest Group or committee (Working, or not),
that is looking at a coherent architecture or design for a possible future
version of Haskell (no offense to Tim Sheard's excellent Ωmega project)?


Walt BMeph Rorie-Baety
"A mountain that eats people? I want one!" - Richard, of LFGComic.com


Re: [Haskell-cafe] Construction of short vectors

2010-06-25 Thread Felipe Lessa
On Fri, Jun 25, 2010 at 12:41:48AM +0400, Alexey Khudyakov wrote:
 Then constructors like the one below arise naturally, and I don't know how
 to write them properly. It's possible to use fromList, but then a list could
 be allocated, which is obviously wasteful.

Did you see the generated core?  I think you should give a try to
the following simple code:

  import qualified Data.Vector.Generic as VG -- vector == 0.6.*

  vector2 :: Double -> Double -> Vec2D
  vector2 x y = Vec2D (VG.fromListN 2 [x,y])

 Another question: are there any specific problems with short vectors? They
 could be just 2 elements long. I mean performance problems.

Probably there will be more overhead than defining

  data Vec2D = Vec2D {-# UNPACK #-} !Double
 {-# UNPACK #-} !Double

You should profile to see how much difference there is between
those representations.
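
For concreteness, the unpacked alternative could look like this (the constructor and function names here are mine, not from Alexey's code):

```haskell
-- Sketch of the unpacked representation: both Doubles live directly
-- in the constructor, so no boxed Double is followed at runtime.
data Vec2U = Vec2U {-# UNPACK #-} !Double
                   {-# UNPACK #-} !Double
  deriving (Eq, Show)

-- A dot product over the unpacked fields:
dot2 :: Vec2U -> Vec2U -> Double
dot2 (Vec2U x1 y1) (Vec2U x2 y2) = x1*x2 + y1*y2
```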

Cheers,

--
Felipe.


Re: [Haskell-cafe] Core packages and locale support

2010-06-25 Thread Roman Cheplyaka
* Brandon S Allbery KF8NH allb...@ece.cmu.edu [2010-06-25 05:00:08-0400]
 On 6/25/10 02:42 , Roman Cheplyaka wrote:
  * Jason Dagit da...@codersbase.com [2010-06-24 20:52:03-0700]
  On Sat, Jun 19, 2010 at 1:06 AM, Roman Cheplyaka r...@ro-che.info wrote:
  While ghc 6.12 finally has proper locale support, core packages (such as
  unix) still use withCString and therefore work incorrectly when argument
  (e.g. file path) is not ASCII.
 
  Pardon me if I'm misunderstanding withCString, but my understanding of unix
  paths is that they are to be treated as strings of bytes.  That is, unlike
  windows, they do not have an encoding predefined.  Furthermore, you could
  have two filepaths in the same directory with different encodings due to
  this.
  
  you got everything right here. So, as you said, there is a mismatch
  between representation in Haskell (list of code points) and
  representation in the operating system (list of bytes), so we need to
  know the encoding. Encoding is supplied by the user via locale
  (https://secure.wikimedia.org/wikipedia/en/wiki/Locale), particularly
  LC_CTYPE variable.
 
 You might want to look at how Python is dealing with this (including the
 pain involved; best to learn from example).

Do you mean the pain when filenames can not be decoded using current
locale settings and thus the files are not accessible? (The same about
environment variables.)

Agreed, it's unpleasant. The other way would be changing [Char] to [Word8]
or ByteString. But this would a) break all existing programs and b) be
an OS-specific hack. Crap.
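
The code-point/byte mismatch is easy to make concrete: a single Char can become several bytes on disk. An illustration-only partial UTF-8 encoder (not production code; it covers only code points below U+0800):

```haskell
import Data.Char (ord)
import Data.Word (Word8)

-- A Haskell String is a list of code points; a Unix path is a list
-- of bytes.  Under UTF-8, 'é' (U+00E9) is one Char but two bytes.
encodeUtf8Char :: Char -> [Word8]
encodeUtf8Char c
  | n < 0x80  = [fromIntegral n]
  | n < 0x800 = [ fromIntegral (0xC0 + n `div` 0x40)
                , fromIntegral (0x80 + n `mod` 0x40) ]
  | otherwise = error "sketch covers only U+0000..U+07FF"
  where n = ord c
```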

Brandon, do you have any ideas on how we should proceed with this?



Re: [Haskell-cafe] Core packages and locale support

2010-06-25 Thread Brandon S Allbery KF8NH

On 6/25/10 17:56 , Roman Cheplyaka wrote:
 * Brandon S Allbery KF8NH allb...@ece.cmu.edu [2010-06-25 05:00:08-0400]
 You might want to look at how Python is dealing with this (including the
 pain involved; best to learn from example).
 
 Do you mean the pain when filenames can not be decoded using current
 locale settings and thus the files are not accessible? (The same about
 environment variables.)

Yes, this.

 Agreed, it's unpleasant. The other way would be changing [Char] to [Word8]
 or ByteString. But this would a) break all existing programs and b) be
 an OS-specific hack. Crap.

But it *is* OS-specific, just as Windows' UTF-16 is an OS-specific
mechanism.  Unfortunately, there's no good solution in the Unix case aside
from assuming a specific encoding, and the locale is as good as any; but I
think LC_CTYPE is probably the most applicable.  This will, however, confuse
everyone else.

Perhaps best is to look at whether there is any consensus building as to how
to resolve it, and if not use locale but document it as an unstable
interface.  Or possibly just leave things as is until consensus develops.
It would be Bad to choose one (say, locale) only to have everyone else go in
a different direction (say, UTF-8 with the application libraries potentially
re-encoding filenames).

(The flip side of *that*, of course, is that everyone else (save GvR) may be
waiting for the same thing.  Which is why we look around first.  At worst it
may be a reason to push for something, perhaps as part of LSB and then
assumed anywhere that doesn't have its own solution.  I think the main
issues there would be *BSD, which is used to being on the wrong end of the
Linux stick, and Solaris which Oracle has helpfully (effectively; nobody
manning the rudders) nuked from orbit. :/ )



Re: [Haskell-cafe] Type-Level Programming

2010-06-25 Thread Jason Dagit
On Fri, Jun 25, 2010 at 2:26 PM, Walt Rorie-Baety black.m...@gmail.com wrote:

 I've noticed over the - okay, over the months - that some folks enjoy the
 puzzle-like qualities of programming in the type system (poor Oleg, he's
 become #haskell's answer to the Chuck Norris meme commonly encountered in
 MMORPGs).

 Anyway,... are there any languages out there whose term-level programming
 resembles Haskell type-level programming, and if so, would a deliberate
 effort to appeal to that resemblance be an advantage (leaving out for now
 the hair-pulling effort that such a change would entail)?


I'm not a Prolog programmer, but I've heard that using type classes to do
your computations leads to code that resembles Prolog.  You might enjoy
looking at dependently typed languages.  In those languages the term level
and type level have the same computing power so your programs will go
between the levels at times.  In Agda they share the same syntax even, I
think.
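
The Prolog resemblance Jason mentions shows up clearly in type-level Peano addition, a standard folklore example where each instance reads like a Horn clause (this sketch uses the usual GHC extensions; names are the conventional ones, not from any particular library):

```haskell
{-# LANGUAGE EmptyDataDecls, MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances, ScopedTypeVariables #-}

-- Type-level Peano numerals.  The Add instances read like Prolog:
--   add(z, N, N).
--   add(s(M), N, s(R)) :- add(M, N, R).
data Z
data S n

class Add a b c | a b -> c
instance Add Z n n
instance Add m n r => Add (S m) n (S r)

-- A value-level witness so the solver's answer can be observed:
add :: Add a b c => a -> b -> c
add _ _ = undefined

-- Reify a type-level numeral back to an Int:
class Nat n where toInt :: n -> Int
instance Nat Z where toInt _ = 0
instance Nat n => Nat (S n) where toInt _ = 1 + toInt (undefined :: n)
```

Here `toInt (add (undefined :: S (S Z)) (undefined :: S Z))` makes the type checker "run" the addition.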

Jason


Re: [Haskell-cafe] Huffman Codes in Haskell

2010-06-25 Thread John Lato
Thanks for providing this.  I think this is pretty close to one of the
solutions he presents in the paper.

It seems that part of what prompted Okasaki to investigate this
further was trying to solve the problem in a strict language (ML)
after he saw a solution which relies upon laziness.  What I think is
especially interesting about that is AFAICT his proposed solution
would work with an infinite tree, assuming the language supports lazy
data structures.
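
The lazy solution being alluded to is the Jones-Gibbons circular program; here is a reconstruction (my sketch of the idea, not Okasaki's code). The stream ns supplies each level its starting number, and the output stream is fed back in as the input, shifted by one level:

```haskell
data Tree a = E | T a (Tree a) (Tree a) deriving (Eq, Show)

-- Breadth-first numbering by tying the knot: position d of the input
-- stream is the first free number at depth d, and the output stream
-- (each level's next free number) becomes the input one level down.
bfnum :: Tree a -> Tree Int
bfnum t = t'
  where
    (t', ns) = go t (1 : ns)
    go :: Tree b -> [Int] -> (Tree Int, [Int])
    go E ks = (E, ks)
    go (T _ l r) (k : ks) = (T k l' r', (k + 1) : ks'')
      where
        (l', ks')  = go l ks
        (r', ks'') = go r ks'
```

Because the numbers are produced lazily, this also labels any finite prefix of an infinite tree.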

On Fri, Jun 25, 2010 at 11:03 AM, Thomas Horstmeyer
horst...@mathematik.uni-marburg.de wrote:

 How would you implement bfnum?  (If you've already read the paper,
 what was your first answer?)

 Actually, my first thought was "just traverse the tree twice". Then it
 dawned on me that the tree could be of infinite size, and I settled for the
 following solution:

 bfsn :: Tree a -> Tree Int
 bfsn E = E
 bfsn t = f [t] [] 0
   where
     f :: [Tree a] -> [Tree a] -> Int -> Tree Int
     f (T _ E E : xs) ys n = T n E E
     f (T _ t1 E : xs) ys n =
       T n (f (t1:children xs) (children ys) (n + length xs + length ys + 1)) E
     f (T _ E t2 : xs) ys n =
       T n E (f (t2:children xs) (children ys) (n + length xs + length ys + 1))
     f (T _ t1 t2 : xs) ys n = T n t1' t2'
       where
         t1' = f (t1:t2:children xs) (children ys) m
         t2' = f (t2:children xs) (children ys ++ children [t1]) (m+1)
         m = length xs + length ys + n + 1

 children :: [Tree a] -> [Tree a]
 children [] = []
 children (E:xs) = children xs
 children (T _ E E:xs) = children xs
 children (T _ E t2:xs) = t2:children xs
 children (T _ t1 E:xs) = t1:children xs
 children (T _ t1 t2:xs) = t1:t2:children xs

 One could perhaps rewrite it into something more elegant (fewer cases for
 the f-function, write children as a map), but you wanted the first answer.
 I am going to have a look at the paper now...

 Thomas




Re: [Haskell-cafe] Core packages and locale support

2010-06-25 Thread Jason Dagit
On Fri, Jun 25, 2010 at 3:15 PM, Brandon S Allbery KF8NH 
allb...@ece.cmu.edu wrote:

 [snip]

  Perhaps best is to look at whether there is any consensus building as to
  how to resolve it, and if not use locale but document it as an unstable
  interface.  Or possibly just leave things as is until consensus develops.
  It would be Bad to choose one (say, locale) only to have everyone else go
  in a different direction (say, UTF-8 with the application libraries
  potentially re-encoding filenames).


In the case of IO you can disable the locale specific encoding/decoding by
switching to binary mode.  Would a similar API be available when working
with filepaths?  Darcs, for instance, deals with lots of file paths and has
very specific requirements.  Losing access to files due to bad encodings, or
mistaken encodings, is the sort of thing that would break some people's
repositories.  So tools like Darcs would probably need a way to disable this
sort of automatic encoding/decoding.

Jason


Re: [Haskell-cafe] Type-Level Programming

2010-06-25 Thread wren ng thornton

Jason Dagit wrote:

On Fri, Jun 25, 2010 at 2:26 PM, Walt Rorie-Baety black.m...@gmail.com wrote:


I've noticed over the - okay, over the months - that some folks enjoy the
puzzle-like qualities of programming in the type system (poor Oleg, he's
become #haskell's answer to the Chuck Norris meme commonly encountered in
MMORPGs).

Anyway,... are there any languages out there whose term-level programming
resembles Haskell type-level programming, and if so, would a deliberate
effort to appeal to that resemblance be an advantage (leaving out for now
the hair-pulling effort that such a change would entail)?


I'm not a prolog programmer, but I've heard that using type classes to do
your computations leads to code that resembles prolog.


Indeed. If you like the look of Haskell's type-level programming, you 
should look at logic programming languages based on Prolog. Datalog 
gives a well understood fragment of Prolog. ECLiPSe[1] extends Prolog 
with constraint programming. Mercury[2], lambda-Prolog[3], and Dyna give 
a more modern take on the paradigm.


If you're just a fan of logic variables and want something more 
Haskell-like, there is Curry[4]. In a similar vein there's also 
AliceML[5] which gives a nice futures/concurrency story to ML. AliceML 
started out on the same VM as Mozart/Oz[6], which has similar futures, 
though a different overall programming style.


And, as Jason said, if you're just interested in having the same 
programming style at both term and type levels, then you should look 
into dependently typed languages. Agda is the most Haskell-like, Epigram 
draws heavily from the Haskell community, and Coq comes more from the ML 
tradition. There's a menagerie of others too, once you start looking.



[1] http://eclipse-clp.org/ is currently down, but can be accessed at 
http://87.230.22.228/

[2] http://www.mercury.csse.unimelb.edu.au/
[3] http://www.lix.polytechnique.fr/~dale/lProlog/
[4] http://www-ps.informatik.uni-kiel.de/currywiki/
[5] http://www.ps.uni-saarland.de/alice/
[6] http://www.mozart-oz.org/

--
Live well,
~wren


[Haskell-cafe] building a patched ghc

2010-06-25 Thread braver
How do I get the latest 6.12 source and cherry-pick selected patches
from the trunk, e.g. Simon's GC fixes?  The wiki says that you get the
head with

darcs get --lazy http://darcs.haskell.org/ghc

and the 6.12 with

darcs get --lazy http://darcs.haskell.org/ghc-6.12/ghc

Since the trunk is big enough, I wonder if 6.12 is a subset of it and can
be obtained from it?  The tags are not informative.  If I get the
branch, how will I be able to get the patches from the head, once
they're separate repos?
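
One plausible workflow (an untested sketch; check `darcs pull --help` for the flags your darcs version actually supports):

```shell
# Clone the 6.12 branch lazily, then cherry-pick named patches from HEAD.
darcs get --lazy http://darcs.haskell.org/ghc-6.12/ghc ghc-6.12
cd ghc-6.12
# Preview which patches from the trunk would match:
darcs pull --dry-run --patches "GC" http://darcs.haskell.org/ghc
# Then pull interactively, selecting only the ones you want:
darcs pull --patches "GC" http://darcs.haskell.org/ghc
```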

-- Alexy


Re: [Haskell-cafe] Type-Level Programming

2010-06-25 Thread Gregory Crosswhite
Are any of those compatible with Haskell, so that we could mix code in
those languages with Haskell code?


Cheers,
Greg

On 6/25/10 9:49 PM, wren ng thornton wrote:
 Indeed. If you like the look of Haskell's type-level programming, you
 should look at logic programming languages based on Prolog. [snip]




[Haskell-cafe] Re: building a patched ghc

2010-06-25 Thread braver
An attempt to build the trunk gets me this:

/opt/portage/usr/lib/gcc/x86_64-pc-linux-gnu/4.2.4/../../../../x86_64-
pc-linux-gnu/bin/ld: rts/dist/build/RtsStartup.dyn_o: relocation
R_X86_64_PC32 against symbol `StgRun' can not be used when making a
shared object; recompile with -fPIC

-- I use prefix portage on a CentOS box, admittedly a non-standard
setup.  Its gcc is found first and it wants -fPIC...  Should I just
add it to CFLAGS or what?
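
A build.mk fragment along these lines might do it (the variable name is an assumption about the GHC build system of that era; verify against mk/config.mk in your tree):

```make
# mk/build.mk -- sketch; pass -fPIC to every C compilation so the
# dynamic RTS objects can be linked into a shared object.
SRC_CC_OPTS += -fPIC
```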

-- Alexy