Re: [Haskell-cafe] Copy on read

2008-05-13 Thread Dan Piponi
On Sat, May 10, 2008 at 7:20 AM, Neil Mitchell [EMAIL PROTECTED] wrote:
  Jurriaan Hage and Stefan Holdermans. Heap recycling for lazy
  languages. In John Hatcliff, Robert Glück, and Oege de Moor, editors,
  _Proceedings of the 2008 ACM SIGPLAN Symposium on Partial Evaluation
  and Semantics-Based Program Manipulation_, PEPM'08, San Francisco,
  California, USA, January 7--8, 2008, pages 189--197. ACM Press, 2008.
  http://doi.acm.org/10.1145/1328408.1328436.

If I'm reading it aright, it works by tagging the types of values as
unique or not but keeps that annotation secret from the programmer.
The typing rules are incredibly complicated so to make things less
scary, they are also hidden from the user. As a result, this idea
makes it impossible for a developer to reason about whether their code
will compile. That doesn't seem like a viable solution. Have I read
this correctly?

Cute idea though.
--
Dan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Albert Y. C. Lai

Advanced technology ought to look like unpredictable magic.

My experience with lazy evaluation is that every time a program is 
slower or bulkier than I presumed, it is not arbitrariness; it is 
something new to learn.


My experience with GHC is such that every surprise it gives me is a 
pleasant surprise: it produces a program faster or leaner than lazy 
evaluation would have it. Where has the box gone?

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Spencer Janssen
On Mon, May 12, 2008 at 08:01:53PM +0100, Andrew Coppin wrote:
 I offer up the following example:

  mean xs = sum xs / length xs

 Now try, say, mean [1.. 1e9], and watch GHC eat several GB of RAM. (!!)

I don't see why the performance implications of this program are surprising.
Just ask any programmer used to a strict language how much memory [1 .. 1e9]
will require.

 If we now rearrange this to

  mean = (\(s,n) -> s / n) . foldr (\x (s,n) -> let s' = s+x; n' = n+1 in 
 s' `seq` n' `seq` (s', n')) (0,0)

 and run the same example, and watch it run in constant space.

This will use linear stack space.  You probably meant to use foldl'?

Better:

mean = uncurry (/) . foldl' f (0, 0)
 where f (!s, !n) x = (s + x, n + 1)

   -- or, if you require Haskell '98:
   f (s, n) x = s `seq` n `seq` (s + x, n + 1)

This version is very legible in my opinion.  In fact, the algorithm is
identical to what I'd write in C.  Also, mean [1 .. 1e9] will actually work
in Haskell, while in C you'll just run out of memory.
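
For reference, here is a self-contained version of the fold above (same code, with the import it needs and a smaller list so it finishes quickly; the 1e6 bound is just for illustration):

```haskell
import Data.List (foldl')

-- seq forces both accumulators at each step, so the thunk chains never
-- grow and the fold runs in constant space.
mean :: [Double] -> Double
mean = uncurry (/) . foldl' f (0, 0)
  where f (s, n) x = s `seq` n `seq` (s + x, n + 1)

main :: IO ()
main = print (mean [1 .. 1e6])   -- prints 500000.5
```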


Cheers,
Spencer Janssen
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread J C
On Mon, May 12, 2008 at 4:38 AM, Richard Kelsall
[EMAIL PROTECTED] wrote:

  Hello JC, I think you've set yourself a challenge there :) Welcome to
  Haskell programming. Taking a Shootout entry and playing with it is
  a great way to learn Haskell. The Shootout provides an example in your
  favourite previous language for comparison and a small well defined
  program with exact test results you can pit your wits against. Fame
  awaits you for a fast and beautiful entry. I'm still learning useful
  things from the Fasta benchmark. It's surprising how many interesting
  things you can discover in a small piece of code.

It may be fun, but I don't think it would be meaningful. My code will
be, most likely, slow, leaving some doubt as to whether it's slow
because of the limitations of the compiler or my inexperience.

On the other hand, if the experts can't help using malloc, unsafe*,
global mutables and IO, I'll be able to conclude that this is probably
what it takes to make Haskell run fast :-(
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Short circuiting and the Maybe monad

2008-05-13 Thread Abhay Parvate
Yes, I have always wished that the operator >>= were right-associative, so that
this short-circuiting worked even when written without the 'do' notation.
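
For concreteness, a tiny sketch (f, g and h are made-up Maybe actions, not from the thread) showing that both groupings short-circuit to Nothing:

```haskell
-- Hypothetical example actions
f, g, h :: Int -> Maybe Int
f x = Just (x + 1)
g x = Just (x * 2)
h x = Just (x - 3)

main :: IO ()
main = do
  -- Left-associated chain: each (Nothing >>= _) step is Nothing again.
  print ((((Nothing :: Maybe Int) >>= f) >>= g) >>= h)  -- Nothing
  -- Successful chain for comparison: 1 -> 2 -> 4 -> 1.
  print (((Just 1 >>= f) >>= g) >>= h)                  -- Just 1
```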

On Tue, May 13, 2008 at 3:39 AM, John Hamilton [EMAIL PROTECTED] wrote:

 I'm trying to understand how short circuiting works with the Maybe monad.
 Take the expression n >>= f >>= g >>= h, which can be written as
 (((n >>= f) >>= g) >>= h) because >>= is left associative.  If n is
 Nothing, this implies that (n >>= f) is Nothing, and so on, each nested
 sub-expression easily evaluating to Nothing, but without there being a
 quick way to short circuit at the beginning.

 Now take the example

    do x <- xs
       y <- ys
       z <- zs
       return (x, y, z)

 which I believe desugars like

xs >>= (\x -> ys >>= (\y -> zs >>= (\z -> return (x, y, z))))

 Here the associativity of >>= no longer matters, and if xs is Nothing the
 whole expression can quickly be determined to be Nothing, because Nothing
 >>= _ = Nothing.  Am I looking at this correctly?

 - John
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Neal Alexander
Just a heads up - I only have a month or so of experience with Haskell, so 
a lot of these issues may be my own fault.


Anyway, the log file that I'm parsing uses English grammar, and the 
performance really dropped just by upgrading to Parsec 3. I was hoping to 
use the ByteString support to boost the speed of already slow code, but 
had no such luck. It basically went from "Ugh, this is kinda slow" to 
"Ugh, I'm gonna go grab a burger and let this melt my CPU" haha.


If anything, it's probably all the look-ahead the rules have to do to get 
the context-specific stuff right.


Some of the code is here: http://hpaste.org/7578

-
#1 . Parsec 2:

total time  =   46.44 secs   (2322 ticks @ 20 ms)
total alloc = 16,376,179,008 bytes  (excl. profiling overheads)

Parse taking 51.3% time and 65.3% alloc.
-
-
#2 . Parsec3
(4 times slower, no code changes):

total time  =  181.08 secs   (9054 ticks @ 20 ms)
total alloc = 46,002,859,656 bytes  (excl. profiling overheads)

Text.Parsec.Prim  Taking 84.7% time and 86.0% alloc.
-
-
#3 . Parsec3 but with the whole project converted to ByteString:
(8 times slower):

total time  =  378.22 secs   (18911 ticks @ 20 ms)
total alloc = 100,051,417,672 bytes  (excl. overheads)
-

The third parse probably isn't a great indicator, since I reverted some 
rule-set optimizations that were causing errors. Plus I ended up packing 
the parsec String results into ByteStrings to fit in with everything else.



I can post the full profiling info if anyone really cares.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Don Stewart
gale:
 Andrew Coppin wrote:
   I offer up the following example:
 
mean xs = sum xs / length xs
 
   Now try, say, mean [1.. 1e9], and watch GHC eat several GB of RAM. (!!)
 
   If we now rearrange this to
 
 mean = (\(s,n) -> s / n) . foldr (\x (s,n) -> let s' = s+x; n' = n+1 in s'
  `seq` n' `seq` (s', n')) (0,0)
 
   and run the same example, and watch it run in constant space.
 
   Of course, the first version is clearly readable, while the second one is
  almost utterly incomprehensible, especially to a beginner. (It's even more
  fun that you need all those seq calls in there to make it work properly.)
 
 You can write it like this:
 
 mean = uncurry (/) . foldl' (\(s,n) x -> ((,) $! s+x) $! n+1) (0,0)
 
 I don't think that's so bad. And for real-life examples, you almost
 never need the ($!)'s or seq's - your function will do some kind
 of pattern matching that will force the arguments. So really, all
 you need to remember is: if you're repeating a fast calculation across
 a big list, use foldl'. And insertWith', if you're storing the result in
 a Data.Map. That's about it.
 
   The sad fact is that if you just write something in Haskell in a nice,
  declarative style, then roughly 20% of the time you get good performance,
  and 80% of the time you get laughably poor performance.
 
 I don't know why you think that. I've written a wide variety of functions
 over the past few years. I find that when performance isn't good enough,
 it's because of the algorithm, not because of laziness. Laziness
 works for me, not against me.
 
 Of course, it depends what you mean by good performance. I have
 never needed shootout-like performance. But to get that, you need
 some messy optimization in any language.

We can actually get great performance here,

{-# LANGUAGE TypeOperators #-}

import Data.Array.Vector
import Text.Printf

mean :: UArr Double -> Double
mean arr = b / fromIntegral a
  where
k (n :*: s) a = n+1 :*: s+a
a :*: b = foldlU k (0 :*: 0) arr :: (Int :*: Double)

main = printf "%f\n" . mean $ enumFromToFracU 1 1e9

ghc -O2

$ time ./A
5.067109
./A  3.69s user 0.00s system 99% cpu 3.692 total

Versus on lists:

import Data.List
import Text.Printf
import Data.Array.Vector

mean :: [Double] -> Double
mean arr = b / fromIntegral a
  where
k (n :*: s) a = (n+1 :*: s+a)
(a :*: b) = foldl' k (0 :*: 0) arr :: (Int :*: Double)

main = printf "%f\n" . mean $ [1 .. 1e9]

$ time ./A 
5.067109
./A  66.08s user 1.53s system 99% cpu 1:07.61 total

Note the use of strict pairs, key to ensuring the accumulators end up in
registers. The performance difference here is due to foldl' (and all left
folds) not fusing in normal build/foldr fusion.

The vector version runs about the same speed as unoptimised C.

-- Don
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Copy on read

2008-05-13 Thread Matthew Naylor
Hi Andrew,

my probably dodgy reason for mentioning deforestation is that sharing
of intermediate values is a major stumbling block; code that uses data
linearly is possibly well suited for deforesting.  See Frankau's SASL
for a language that deforests all lists simply by not letting you copy
them!  (IIRC there is another constraint that forbids accumulating
parameters too.)

 Similarly, there are recursion patterns for which fusion isn't very 
 easy.

Yep, I suspect you're right.

 That's why most array-based code is explicitly in-place.  wouldn't
 it be nice if it didn't have to be?

I agree.  As an aside, DiffArray looks quite nice:

  http://www.haskell.org/haskellwiki/Modern_array_libraries

``if a diff array is used in a single-threaded style, ..., a!i takes
O(1) time and a//d takes O(length d).''

Notice the use of seq in 2nd example to enforce a kind of
single-threaded behaviour.  Seems nasty!  I wonder if this could be
avoided by providing a (*!*) such that

  arr *!* i = seq x (arr, x)
    where x = arr ! i

It returns a new array which the programmer should use if they want
single-threaded behaviour.
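
A runnable sketch of the proposed (*!*), using an ordinary immutable Data.Array in place of DiffArray just to show the shape of the idea (the array type and example values are illustrative assumptions):

```haskell
import Data.Array

-- The proposed operator: force the element and hand back the array
-- alongside it, so the caller keeps threading the "same" array.
(*!*) :: Ix i => Array i e -> i -> (Array i e, e)
arr *!* i = x `seq` (arr, x)
  where x = arr ! i

main :: IO ()
main = do
  let a = listArray (0, 2) [10, 20, 30] :: Array Int Int
      (a', v) = a *!* 1   -- keep using a', not a, for single-threading
  print v                 -- 20
  print (a' ! 2)          -- 30
```

With DiffArray the point would be that continuing with a' (rather than the original) keeps accesses O(1).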

Matt.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Abhay Parvate
I don't know why, but perhaps beginners may expect too much from the
laziness, almost to the level of magic (me too, in the beginning!). In an
eager language, a function like

mean :: (Fractional a) => [a] -> a

expects the *whole* list before it can calculate the mean, and the question
of the 'mean' function consuming memory does not arise. We look for other
methods of finding the mean of very long lists. We do not expect such a
function in C or Scheme to succeed when the number of numbers is more than
that can fit the memory. (It will not even be called; the list creation
itself will not succeed.) Lazy languages allow us to use the same
abstraction while allowing doing more. But it is not magic, it is plain
normal order evaluation. Just as every Scheme programmer or C programmer
must understand the consequences of the fact that the arguments to a
function will be evaluated first, a Haskell programmer must understand the
consequences of the fact that the arguments to a function will be evaluated
only when needed/forced. Perhaps an early emphasis on an understanding of
normal order evaluation is needed while learning Haskell in order to stop
expecting magic, especially when one comes prejudiced from eager languages.
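
Two standard one-liners make the point: under normal-order evaluation an infinite list is fine as long as only finitely much of it is demanded.

```haskell
-- Arguments are evaluated only when forced, so finite demand on an
-- infinite list terminates.
main :: IO ()
main = do
  print (take 3 (map (* 2) [1 ..]))    -- [2,4,6]
  print (head (filter (> 10) [1 ..]))  -- 11
```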

Regards
Abhay
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Brandon S. Allbery KF8NH


On 2008 May 13, at 0:26, J C wrote:


On the other hand, if the experts can't help using malloc, unsafe*,
global mutables and IO, I'll be able to conclude that this is probably
what it takes to make Haskell run fast :-(


Very few of the shootout entries have been revisited since most of the  
improvements to list and stream fusion, etc. in GHC, if I can trust  
the amount of discussion of shootout entries I've seen on IRC.  Some  
of them are still vintage 6.4.2, which had very little in the way of  
compiler smarts so hand optimization was crucial.  6.8.2, on the other  
hand, does much better all by itself as long as you don't e.g. use  
lists in stupid ways.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Brandon S. Allbery KF8NH


On 2008 May 12, at 22:18, Jeff Polakow wrote:


Then, I immediately blow my stack if I try something like:

mean [1..10].

The culprit is actually sum, which is defined in the base libraries 
as either a foldl or a direct recursion, depending on a compiler 
flag. In either case, the code is not strict enough; just trying to 
compute:


 sum [1..1000]


There's also an insufficient-laziness issue with enumerations in at 
least some versions of the standard library, IIRC, meaning that just 
saying [1..1000] can introduce a space leak that can lead to a 
stack blowout.


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon universityKF8NH


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Richard Kelsall

J C wrote:

[EMAIL PROTECTED] wrote:


 Hello JC, I think you've set yourself a challenge there :) Welcome to
 Haskell programming. Taking a Shootout entry and playing with it is
 a great way to learn Haskell. The Shootout provides an example in your
 favourite previous language for comparison and a small well defined
 program with exact test results you can pit your wits against. Fame
 awaits you for a fast and beautiful entry. I'm still learning useful
 things from the Fasta benchmark. It's surprising how many interesting
 things you can discover in a small piece of code.


It may be fun, but I don't think it would be meaningful. My code will
be, most likely, slow, leaving some doubt as to whether it's slow
because of the limitations of the compiler or my inexperience.

On the other hand, if the experts can't help using malloc, unsafe*,
global mutables and IO, I'll be able to conclude that this is probably
what it takes to make Haskell run fast :-(


Don't tell the experts who wrote the current shootout entries, but the
playing field is tilted radically in favour of us beginners being able
to improve on their entries because of new versions of GHC and new
tools that have been developed since they wrote their entries.

GHC will now automatically perform many of the optimisations that used
to have to be done by hand. For example I was surprised to discover
the other day when working on Fasta that putting this plain and simple
version of splitAt

splitAt 0 xs       = ([], xs)
splitAt n (x : xs) = (x : xs', xs'')
  where (xs', xs'') = splitAt (n-1) xs

in my program made it run more quickly than using the built-in version
of splitAt which I now know looks like (ug!) this

splitAt (I# n#) ls
  | n# <# 0#    = ([], ls)
  | otherwise   = splitAt# n# ls
    where
        splitAt# :: Int# -> [a] -> ([a], [a])
        splitAt# 0# xs     = ([], xs)
        splitAt# _  xs@[]  = (xs, xs)
        splitAt# m# (x:xs) = (x:xs', xs'')
          where
            (xs', xs'') = splitAt# (m# -# 1#) xs

because I was compiling my splitAt with -O2 optimisation as opposed
to the built-in version being compiled with -O. The extra optimisations
in -O2 are a new feature of GHC (and -O2 is slower to compile which is
why the built-in version doesn't use it, but that doesn't matter for the
shootout). You may similarly find various elaborations in the shootout
entries that were historically necessary for speed or memory reasons,
but which can now be removed because GHC or new libraries do the work
for us.

Have a go and see what happens to the speed when you change things
to make N-body more readable. I would bet money on there being simple
tweaks which will make it simultaneously faster and more readable.


Richard.


___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Sterling Clover
Well, it would be meaningful for your own experience in learning Haskell.
Some time ago somebody took a shot at nbodies using pure immutable
structures, and as I recall, got within about 4x the performance of mutable
structures. In 6.6, an STArray based approach was maybe 2x slower, but by
now it may well be comparable, as Dons pointed out. In fact, lots of the
shootout entries could use an overhaul now that 6.8 is running on their
machines, as plenty of older, uglier optimizations are probably performed
as efficiently, if not more so, by the newer GHC, whose strictness analysis,
for example, is consistently and pleasantly surprising.

If you take a stab at some of these benchmarks, with some helpful guidance
from #haskell, you'll no doubt get a much better sense of how memory and
performance works in haskell, and which tweaks are just there for the last
2%. So even if you can't beat the current winners (and given the compiler
changes, you may well be able to) you'll still come out ahead in the
process.

As for the Clean entry, though, it no doubt relies heavily on uniqueness
typing and so is secretly mutating like crazy under the hood.

Cheers,
S.

On Tue, May 13, 2008 at 12:26 AM, J C [EMAIL PROTECTED] wrote:

 On Mon, May 12, 2008 at 4:38 AM, Richard Kelsall
 [EMAIL PROTECTED] wrote:

   Hello JC, I think you've set yourself a challenge there :) Welcome to
   Haskell programming. Taking a Shootout entry and playing with it is
   a great way to learn Haskell. The Shootout provides an example in your
   favourite previous language for comparison and a small well defined
   program with exact test results you can pit your wits against. Fame
   awaits you for a fast and beautiful entry. I'm still learning useful
   things from the Fasta benchmark. It's surprising how many interesting
   things you can discover in a small piece of code.

 It may be fun, but I don't think it would be meaningful. My code will
 be, most likely, slow, leaving some doubt as to whether it's slow
 because of the limitations of the compiler or my inexperience.

 On the other hand, if the experts can't help using malloc, unsafe*,
 global mutables and IO, I'll be able to conclude that this is probably
 what it takes to make Haskell run fast :-(
 ___
 Haskell-Cafe mailing list
 Haskell-Cafe@haskell.org
 http://www.haskell.org/mailman/listinfo/haskell-cafe

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Darrin Thompson
On Tue, May 13, 2008 at 2:20 AM, Don Stewart [EMAIL PROTECTED] wrote:
  Note the use of strict pairs, key to ensuring the accumulators end up in
  registers. The performance difference here is due to foldl' (and all left
  folds) not fusing in normal build/foldr fusion.

  The vector version runs about the same speed as unoptimised C.


These tricks going into Real World Haskell? When you say someone
needs to get familiar with the STG paper it scares me (a beginner)
off a little, and I've been making an effort to approach the papers. I
could barely understand the Fusion one, and getting familiar with
compiler internals sounds like something I'd not be ready for.
Probably if I really looked at ghc-core I'd be pleasantly surprised
but I'm totally biased against even looking. Gcc is hard to read, thus
ghc is also. So while you are right about all this when you say it, I
think your goal is to persuade. RWH has some of the best practical
prose I've read yet. Well done there. Hopefully chapter 26 will be
crammed full of this stuff?

--
Darrin
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Richard Kelsall

Bulat Ziganshin wrote:

Hello Richard,

Tuesday, May 13, 2008, 6:10:54 PM, you wrote:


because I was compiling my splitAt with -O2 optimisation as opposed
to the built-in version being compiled with -O. The extra optimisations
in -O2 are a new feature of GHC (and -O2 is slower to compile which is
why the built-in version doesn't use it, but that doesn't matter for the
shootout).


-O2 is a very old ghc feature, and i think that the ghc base library is
compiled with -O2 - it's too obvious an idea



In July 2007 -O2 was documented in GHC as making no difference to
the speed of programs :

http://www.haskell.org/pipermail/haskell-cafe/2007-July/029118.html

and from this thread

http://www.haskell.org/pipermail/haskell-cafe/2008-April/042155.html

it appears to be currently unused for splitAt.

I guess -O2 has however been around for a long time.


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re[2]: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Bulat Ziganshin
Hello Richard,

Tuesday, May 13, 2008, 7:56:36 PM, you wrote:

 In July 2007 -O2 was documented in GHC as making no difference to
 the speed of programs :

 http://www.haskell.org/pipermail/haskell-cafe/2007-July/029118.html

it's because ghc is 15 years old and its documentation may not be
updated as things change :)

 and from this thread

 http://www.haskell.org/pipermail/haskell-cafe/2008-April/042155.html

 it appears to be currently unused for splitAt.

i've read this thread. it was just an assumption - don't believe it
before you have checked it


-- 
Best regards,
 Bulatmailto:[EMAIL PROTECTED]

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Achim Schneider
Darrin Thompson [EMAIL PROTECTED] wrote:

 On Tue, May 13, 2008 at 2:20 AM, Don Stewart [EMAIL PROTECTED] wrote:
   Note the use of strict pairs, key to ensuring the accumulators
  end up in registers. The performance difference here is due to
  foldl' (and all left folds) not fusing in normal build/foldr fusion.
 
   The vector version runs about the same speed as unoptimised C.
 
 
 These tricks going into Real World Haskell? When you say someone
 needs to get familiar with the STG paper it scares me (a beginner)
 off a little, and I've been making an effort to approach the papers. I
 could barely understand the Fusion one, and getting familiar with
 compiler internals sounds like something I'd not be ready for.
 Probably if I really looked at ghc-core I'd be pleasantly surprised
 but I'm totally biased against even looking. Gcc is hard to read, thus
 ghc is also. So while you are right about all this when you say it, I
 think your goal is to persuade. RWH has some of the best practical
 prose I've read yet. Well done there. Hopefully chapter 26 will be
 crammed full of this stuff?
 
You know, sometimes I wish this would be the Eve forums, so that I
could just answer FAIL.

Anyway, the goal of the Haskell community is to prevent success at any
cost, so anything that is done to ease things for noobs that is not
purely meant to prevent anyone from asking questions will be warded off
by automatic defence systems of the big ivory tower, which are
reinforced faster than you can ever hope to understand any topic.


To get a bit more on-topic: I currently completely fail to implement a
layout rule in Parsec because I don't understand its inner workings,
and thus constantly mess up my state. Parsec's ease of usage is
deceiving as soon as you use more than combinators: Suddenly the
plumbing becomes important, and hackage is full of such things. Haskell
has potentially infinite learning curves, and each one of them
usually represents a wall. To make them crumble, you have to get used to
not understanding anything until you understand everything.

-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] saner shootout programs

2008-05-13 Thread Richard Kelsall

Bulat Ziganshin wrote:

Hello Richard,


In July 2007 -O2 was documented in GHC as making no difference to
the speed of programs :



http://www.haskell.org/pipermail/haskell-cafe/2007-July/029118.html


it's because ghc is 15 years old and its documentation may not be
updated as things change :)


and from this thread



http://www.haskell.org/pipermail/haskell-cafe/2008-April/042155.html



it appears to be currently unused for splitAt.


i've read this thread. it was just an assumption - don't believe it
before you have checked it



Hello Bulat,

Yes, it was just a plausible guess, but not contradicted by the experts.
And that was using the Windows version of GHC, so other versions may
have better optimisation. I don't know how to check; maybe the experts
can illuminate the subject?


Richard.
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Analog data acquisition

2008-05-13 Thread Tom Nielsen
Hello,

I would like to use a lazy, purely functional language to create an
experiment description (and execution!) language for cellular
neuroscience, i.e. electrical recordings and stimulation.
Unfortunately, this means I have to talk to a
Analog-to-Digital/Digital-to-Analog converter board, with a precision
of about 16 bit at a rate of 50 kHz. I currently have a National
Instruments M-series board, but would be happy to try another board.
I'm not looking for real-time control right now, but that might be of
interest in the future.

Has anyone looked at creating bindings to the NI-DAQmx drivers or the
open-source COMEDI project, or similar hardware? Would anyone be
interested in helping out with this driver binding?

Regards
Tom
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Instance of Eq on cyclic data structure?

2008-05-13 Thread Leonard Siebeneicher

Hi.

I am testing a winged edge data structure.


module Main where

type Point3 = (Float, Float, Float)

data Body= Body [Vertex] [Edge] [Face] deriving(Eq)
data Face= Face Edge deriving(Eq)
data Edge= Edge  (Vertex, Vertex)
(Face, Face)
(Edge, Edge)
(Edge, Edge) deriving(Eq)
data Vertex= Vertex Point3 Edge deriving(Eq)

...

--- Face, Edge, Vertex: Conversion

getVertsFromFace :: Face -> [Vertex]
getVertsFromFace (Face startEdge) = process_wing_clockwise startEdge
where
  process_wing_clockwise (Edge
  (vp, _)
  _
  (_, eWing1)
  _)
  | startEdge == eWing1 =
  [vp]
  | otherwise =
  vp : (process_wing_clockwise eWing1)


--- Misc procedures
printPointList :: [Vertex] -> IO ()
printPointList [] = return ()
printPointList ((Vertex p _ ):vs) = putStr (show p) >>
                                    printPointList vs

--- main
main =
do
  let q = createQuad (0,0,0) (0,1,0) (1,1,0) (1,0,0)
  let f = (\(Body _ _ (face:faces)) -> face) q
  let verts = getVertsFromFace f
  printPointList verts



Here I get an error message.


(0.0,0.0,0.0)(0.0,1.0,0.0)(1.0,1.0,0.0)runhugs: Error occurred

ERROR - Control stack overflow

I think this comes about because the derived Eq instance (==) gets into 
an infinite loop.


Alternatively I tried following

--- We want to identify each Body, Face, Edge or Vertex by a number ID
type ID = Int  --- A special number for identifying, used for Eq

data Body = Body ID [Vertex] [Edge] [Face]
data Face = Face ID (Edge)
data Edge = Edge ID
(Vertex , Vertex)
(Face , Face)
(Edge , Edge)
(Edge , Edge)
data Vertex = Vertex ID Edge

--- Eq Instances
instance Eq Vertex where
(Vertex i1 _) == (Vertex i2 _) = i1 == i2

instance Eq Face where
(Face i1 _ ) == (Face i2 _) = i1 == i2

instance Eq Edge where
(Edge i1 _ _ _ _) == (Edge i2 _ _ _ _) = i1 == i2


instance Eq (Body) where
(Body i1  _ _ _) == (Body i2  _ _ _) = i1 == i2

...

This way my code does not hang. But I have to manage those ugly IDs.
Is there a better way to create instances of class Eq, without something 
like an ID?


Thanks for reading.

Best regards, Leonard







 Begin 

module Main where

type Point3 = (Float, Float, Float)

data Body= Body [Vertex] [Edge] [Face] deriving(Eq)
data Face= Face Edge deriving(Eq)
data Edge= Edge (Vertex, Vertex)
(Face, Face)
(Edge, Edge)
(Edge, Edge) deriving(Eq)
data Vertex= Vertex Point3 Edge deriving(Eq)

{-
  implementing simple generative modelling
-}

--- elementar object generation
createQuad :: Point3 ->
              Point3 ->
              Point3 ->
              Point3 ->
              Body

createQuad p0 p1 p2 p3 =
let
{faceFront = Face edge0
;faceBack = Face edge2
;vert0 = Vertex p0 edge0
;edge0 = Edge
 (vert0, vert1)
 (faceFront, faceBack)
 (edge3, edge1)
 (edge1, edge3)
;vert1 = Vertex p1 edge1
;edge1 = Edge
 (vert1, vert2)
 (faceFront, faceBack)
 (edge0, edge2)
 (edge2, edge0)
;vert2 = Vertex p2 edge2
;edge2 = Edge
 (vert2, vert3)
 (faceFront, faceBack)
 (edge1, edge3)
 (edge3, edge1)
;vert3 = Vertex p3 edge3
;edge3 = Edge
 (vert3, vert0)
 (faceFront, faceBack)
 (edge2, edge0)
 (edge0, edge2)
}
in
  Body [vert0, vert1, vert2, vert3] [edge0, edge1, edge2, edge3] 
[faceFront, faceBack]


--- Face, Edge, Vertex: Conversion

getVertsFromFace :: Face -> [Vertex]
getVertsFromFace (Face startEdge) = process_wing_clockwise startEdge
where
  process_wing_clockwise (Edge
  (vp, _)
  _
  (_, eWing1)
  _)
  | startEdge == eWing1 =
  [vp]
  | otherwise =
  vp : (process_wing_clockwise eWing1)


--- Misc procedures
printPointList :: [Vertex] -> IO ()
printPointList [] = return ()
printPointList ((Vertex p _ ):vs) = putStr (show p) >>
                                    printPointList vs

--- main
main =
do
  let q = createQuad (0,0,0) (0,1,0) (1,1,0) (1,0,0)
  let f = (\(Body _ _ (face:faces)) -> face) q
  let verts = getVertsFromFace f
  printPointList verts
-
 End  

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Data.Dynamic over the wire

2008-05-13 Thread Jules Bean

 {-# LANGUAGE ScopedTypeVariables #-}

Data.Dynamic gives a passable impression of adding support for
dynamically typed code and runtime typing to GHC, without changing
the basic statically typed, all-types-known-at-compile-time nature of
the language.

Note that Data.Dynamic relies upon two things: it relies upon a
concrete representation of types, given by TypeRep, and a primitive
which has to be provided by the compiler to actually implement
fromDynamic. (In GHC it uses unsafeCoerce# which is already
available, but you could imagine providing other primitives).

In principle TypeReps could be derived by hand, although if you do so
you can break everything by providing invalid instances. In practice
we'd rather the compiler did it for us and guaranteed safety.

You can do all sorts of things with Dynamic, but the general pattern
is that data which has some fixed, known type, can be passed through
a chunk of code which doesn't know its type (wrapped in Dynamic) and
then eventually consumed by another piece of code which *does* know
the type, and can unwrap it. The consuming code has to know the type
to unwrap it, although it can 'guess' various alternatives if it
wants, and thus type safety is preserved.
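
A short self-contained illustration of that pattern using Data.Dynamic (the
values and type guesses here are made up for the example):

```haskell
import Data.Dynamic (Dynamic, toDyn, fromDynamic)

-- Producer side: wrap a value whose type it knows.
wrapped :: Dynamic
wrapped = toDyn (42 :: Int)

-- Consumer side: must guess the type; a wrong guess yields Nothing,
-- so type safety is preserved.
asString :: Maybe String
asString = fromDynamic wrapped

asInt :: Maybe Int
asInt = fromDynamic wrapped

main :: IO ()
main = do
  print asString  -- Nothing (wrong guess)
  print asInt     -- Just 42 (right guess)
```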

One thing which you can't obviously do is write Read or Show instances
for Dynamic. So can we pass Dynamic data over the wire?  If not,
Dynamic is limited to the context of within a single program, and
can't be used over the network between cooperating programs, or in
file formats, etc.

You can try this:

 import Data.Typeable

 data SerialisedDynamic = SD TypeRep String deriving (Show)

 freeze :: (Show a, Typeable a) => a -> SerialisedDynamic
 freeze x = SD (typeOf x) (show x)

 thaw :: forall a . (Read a, Typeable a) => SerialisedDynamic -> Maybe a
 thaw (SD t s) = if typeOf (undefined :: a) == t then
                    Just (read s)
                 else Nothing

This is close, and works as far as it goes. It is a limited
reimplementation of Dynamic which uses show/read instead of
unsafeCoerce#. As such it is pure haskell (but relies on Typeable
instances).
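
For concreteness, here is the sketch above assembled into a runnable
round-trip (the example values are mine; ScopedTypeVariables is needed for
the type annotation in thaw):

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
import Data.Typeable (TypeRep, Typeable, typeOf)

data SerialisedDynamic = SD TypeRep String deriving (Show)

-- Record the value's TypeRep alongside its shown form.
freeze :: (Show a, Typeable a) => a -> SerialisedDynamic
freeze x = SD (typeOf x) (show x)

-- Unwrap only succeeds when the requested type's tag matches.
thaw :: forall a. (Read a, Typeable a) => SerialisedDynamic -> Maybe a
thaw (SD t s)
  | typeOf (undefined :: a) == t = Just (read s)
  | otherwise                    = Nothing

sd :: SerialisedDynamic
sd = freeze (42 :: Int)

main :: IO ()
main = do
  print (thaw sd :: Maybe Int)     -- Just 42: type tag matches
  print (thaw sd :: Maybe Double)  -- Nothing: type tag does not match
```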

You can't finish it off because you can't derive a 'Read' instance for
SD, because there is no read instance for TypeRep. Off-hand I can't
think of any reason why there can't be a Read instance for TypeRep,
but it would be a bit tricky with the current TypeRep because of the
way it's implemented, I think. You need to take care about globally
qualified types and might want to use package names like ghc does in
its linking phase, but those are definitely surmountable problems.

Having said all that, I'm not sure how useful this really is. Most of
the time you could use this, you could equally just pass around the
String and 'read' it once you get to the place where you want to use
the value. Practical over-the-wire protocols necessarily have some
kind of tagging mechanism, and all this adds is a global tag table
for Typeable types via TypeRep.

Jules


[Haskell-cafe] hsc2hs: expanding macros in the .hsc file

2008-05-13 Thread Olivier Boudry
Hi all,

Is it possible to expand macros defined in includes into the .hsc file? I'm
trying to call functions from a library written in C. The library can be
used with or without Unicode chars, depending on #define instructions.

The library has macros for all the standard functions used to work on
strings (malloc, strlen, strcpy, strtok, ...). So that in code using the
library one can always call the macro and don't care about the char type
(strlenU instead of wcslen or strlen). Macro expansion will define which
function will be used depending on the SAPwithUNICODE #define. I put a very
simplified example below. The original file contains much more conditions
(OS, Unicode or not, little or big endian, ...).

#define strlenU SAP_UNAME(strlen)

#ifdef SAPwithUNICODE
  #define SAP_UNAME(name) name##U16
#else
  #define SAP_UNAME(name) name
#endif

#if defined(WCHAR_is_2B)
  #define strlenU16   wcslen
#endif

I would like to be able to expand strlenU in my .hsc file and get the
correct function.

foreign import ccall static "sapuc.h strlenU"
  f_strlenU :: Ptr (#type SAP_UC) -> IO (#type size_t)

I would like to be able to expand the strlenU macro to the real function
name (wcslen or strlen). Is there a way to do this?

Thanks,

Olivier.


Re: [Haskell-cafe] Data.Dynamic over the wire

2008-05-13 Thread Bulat Ziganshin
Hello Jules,

Tuesday, May 13, 2008, 9:39:12 PM, you wrote:
 This is close, and works as far as it goes. It is a limited
 reimplementation of Dynamic which uses show/read instead of

there are gread/gshow funcs. don't know how these works, though :)


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]



Re: [Haskell-cafe] Trying to avoid duplicate instances

2008-05-13 Thread Bulat Ziganshin
Hello Eric,

Tuesday, May 13, 2008, 10:16:48 PM, you wrote:

 -fallow-overlapping-instances doesn't convince GHC.  Is there a way
 around this other than manually writing out all the instances I want?

afaik, no. typeclasses are not the same as OOP classes. i suggest you
to look into http://haskell.org/haskellwiki/OOP_vs_type_classes and in
particular papers mentioned at the end - one describes implementation
of type classes and other emphasize differences between t.c. and
classes. these should shed the light :)


-- 
Best regards,
 Bulat  mailto:[EMAIL PROTECTED]



Re: [Haskell-cafe] Analog data acquisition

2008-05-13 Thread Don Stewart
tanielsen:
 Hello,
 
 I would like to use a lazy, purely functional language to create an
 experiement description (and execution!) language for cellular
 neuroscience, i.e. electrical recordings and stimulation.
 Unfortunately, this means I have to talk to a
 Analog-to-Digital/Digital-to-Analog converter board, with a precision
 of about 16 bit at a rate of 50 kHz. I currently have a National
 Instruments M-series board, but would be happy to try another board.
 I'm not looking for real-time control right now, but that might be of
 interest in the future.
 
 Has anyone looked at creating bindings to the NI-DAQmx drivers or the
 open-source COMEDI project, or similar hardware? Would anyone be
 interested in helping out with this driver binding?

I'm assuming there are existing C libraries for this? So a Haskell
binding would just talk to these?


[Haskell-cafe] Endianess (was Re: GHC predictability)

2008-05-13 Thread Aaron Denney
On 2008-05-12, Andrew Coppin [EMAIL PROTECTED] wrote:
 (Stupid little-endian nonsense... mutter mutter...)

I used to be a big-endian advocate, on the principle that it doesn't
really matter, and it was standard network byte order.  Now I'm
convinced that little endian is the way to go, as bit number n should
have value 2^n, byte number n should have value 256^n, and so forth.

Yes, in human to human communication there is value in having the most
significant bit first.  Not really true for computer-to-computer
communication.
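
The positional-value argument can be made concrete: with little-endian
layout, reassembling a word from its bytes is a plain weighted sum, least
significant byte first (the helper name here is my own illustration):

```haskell
import Data.Bits (shiftL)

-- Little-endian: byte number n carries value 256^n, so reassembly is
-- a simple positional sum with no index reversal.
fromLE :: [Int] -> Int
fromLE bytes = sum [ b `shiftL` (8 * n) | (b, n) <- zip bytes [0 ..] ]

main :: IO ()
main = print (fromLE [0x78, 0x56, 0x34, 0x12] == 0x12345678)  -- True
```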

-- 
Aaron Denney
--



Re: [Haskell-cafe] Endianess

2008-05-13 Thread Ketil Malde
Aaron Denney [EMAIL PROTECTED] writes:

 I used to be a big-endian advocate, on the principle that it doesn't
 really matter, and it was standard network byte order.  Now I'm
 convinced that little endian is the way to go

I guess it depends a lot on what you grew up with.  The names
(little/big endian) are incredibly apt.

The only argument I can come up with, is that big endian seems to make
more sense for 'od':

  % echo foobar > foo 
  % od -x foo
  0000000 6f66 626f 7261 000a
  0000007

Since this is little endian, the output corresponds to "of" "bo" "ra"
"\0\n".

So I guess the argument is that for big-endian, the concatenation of
hex numbers is invariant with respect to word sizes?
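
That invariance is easy to check: rendering the same bytes big-endian as
8-bit or as 16-bit words yields the same concatenated hex string (a small
sketch of my own, using the byte values from the example above):

```haskell
import Data.Bits (shiftL, (.|.))
import Text.Printf (printf)

bytes :: [Int]
bytes = [0x66, 0x6f, 0x6f, 0x62]  -- "foob"

-- Pair consecutive bytes into big-endian 16-bit words.
words16 :: [Int]
words16 = go bytes
  where
    go (a:b:rest) = ((a `shiftL` 8) .|. b) : go rest
    go _          = []

hex8, hex16 :: String
hex8  = concatMap (printf "%02x") bytes
hex16 = concatMap (printf "%04x") words16

main :: IO ()
main = do
  putStrLn hex8   -- 666f6f62
  putStrLn hex16  -- 666f6f62: same string, different word size
```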

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Paul Johnson

Jeff Polakow wrote:

[...] This can be easily fixed by defining a suitable strict sum:

sum' = foldl' (+) 0

and now sum' has constant space. We could try to redefine mean using 
sum':


mean1 xs = sum' xs / fromIntegral (length xs)

but this still gobbles up memory. The reason is that xs is used twice 
and cannot be discarded as it is generated. 
As an experiment I tried using pointfree to see if it would do 
something similar.


 $ pointfree "\xs -> foldl' (+) 0 xs / fromIntegral (length xs)"
 ap ((/) . foldl' (+) 0) (fromIntegral . length)

But when I try this in GHCi 6.8.2 I get:

 Prelude Data.List Control.Monad> let mean2 = ap ((/) . foldl' (+) 0) 
(fromIntegral . length)


 <interactive>:1:12:
    No instance for (Monad ((->) [b]))
      arising from a use of `ap' at <interactive>:1:12-58
    Possible fix: add an instance declaration for (Monad ((->) [b]))
    In the expression: ap ((/) . foldl' (+) 0) (fromIntegral . length)
    In the definition of `mean2':
        mean2 = ap ((/) . foldl' (+) 0) (fromIntegral . length)


Any ideas?  Would the auto-generated pointfree version be any better if 
it could be made to work?
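
For what it's worth, the generated expression does type-check once the
reader-monad instance Monad ((->) r) is in scope; in GHC 6.8 it lived in
Control.Monad.Instances, and in modern base it comes with the Prelude. A
sketch under that assumption (note it still traverses the list twice, so it
does not fix the space behaviour):

```haskell
import Control.Monad (ap)
import Data.List (foldl')

-- ap in the ((->) [Double]) monad applies both arguments to the same
-- list: \xs -> ((/) . foldl' (+) 0) xs ((fromIntegral . length) xs)
mean2 :: [Double] -> Double
mean2 = ap ((/) . foldl' (+) 0) (fromIntegral . length)

main :: IO ()
main = print (mean2 [1 .. 10])  -- 5.5
```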


Paul.


Re: [Haskell-cafe] Analog data acquisition

2008-05-13 Thread Tom Nielsen
Yes. I guess I have to wait for chapter 19, then?

Tom

On Tue, May 13, 2008 at 7:35 PM, Don Stewart [EMAIL PROTECTED] wrote:
 tanielsen:


  Hello,
  
   I would like to use a lazy, purely functional language to create an
   experiement description (and execution!) language for cellular
   neuroscience, i.e. electrical recordings and stimulation.
   Unfortunately, this means I have to talk to a
   Analog-to-Digital/Digital-to-Analog converter board, with a precision
   of about 16 bit at a rate of 50 kHz. I currently have a National
   Instruments M-series board, but would be happy to try another board.
   I'm not looking for real-time control right now, but that might be of
   interest in the future.
  
   Has anyone looked at creating bindings to the NI-DAQmx drivers or the
   open-source COMEDI project, or similar hardware? Would anyone be
   interested in helping out with this driver binding?

  I'm assuming there are existing C libraries for this? So a Haskell
  binding would just talk to these?



Re: [Haskell-cafe] Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Neil Mitchell
Hi

  Anyway, the log file that i'm parsing uses English grammar, and the
 performance really dropped just by upgrading to Parsec3. I was hoping to use
 the ByteString support to boost the speed of already slow code, but had no
 such luck. It basically went from "Ugh, this is kinda slow" to "U i'm
 gonna go grab a burger and let this melt my CPU" haha.

I think it is known that Parsec 3 is slower than Parsec 2, as a result
of the increased generality. I know that in the past someone was
working on it, but I am not sure if they ever got anywhere.

Thanks

Neil


Re: [Haskell-cafe] Data.Dynamic over the wire

2008-05-13 Thread Alfonso Acosta
On Tue, May 13, 2008 at 7:39 PM, Jules Bean [EMAIL PROTECTED] wrote:
  One thing which you can't obviously do is write Read or Show instances
  for Dynamic. So can we pass Dynamic data over the wire?  If not,
  Dynamic is limited to the context of within a single program, and
  can't be used over the network between cooperating programs, or in
  file formats, etc.

I've never used hs-plugins, but if I recall properly, it includes its
own implementation of TypeRep (and consequently Dynamic) in order to
overcome the serialization problem you have mentioned.


Re: [Haskell-cafe] Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Ketil Malde
Neil Mitchell [EMAIL PROTECTED] writes:

 I think it is known that Parsec 3 is slower than Parsec 2, as a result
 of the increased generality. I know that in the past someone was
 working on it, but I am not sure if they ever got anywhere.

I got pretty good performance (IMHO - about 10MB/s, still CPU-bound)
using a lazy bytestring tokenizer and Parsec on top of that.  Of
course, it probably depends on the complexity of the parsing...

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants


Re: [Haskell-cafe] Endianess

2008-05-13 Thread Jed Brown
On Tue 2008-05-13 20:46, Ketil Malde wrote:
 Aaron Denney [EMAIL PROTECTED] writes:
 
 I guess it depends a lot on what you grew up with.  The names
 (little/big endian) are incredibly apt.
 
 The only argument I can come up with, is that big endian seems to make
 more sense for 'od':
 
    % echo foobar > foo 
    % od -x foo
    0000000 6f66 626f 7261 000a
    0000007

This, of course, is because `od -x' regards the input as 16-bit integers.  We
can get saner output if we regard it is 8-bit integers.

  $ od -t x1 foo
  0000000 66 6f 6f 62 61 72 0a
  0000007

  Now I'm convinced that little endian is the way to go, as bit number n
  should have value 2^n, byte number n should have value 256^n, and so forth.

It's not that simple with bits.  They lack consistency just like the usual US
date format and the way Germans read numbers.

Jed




Re: [Haskell-cafe] Endianess

2008-05-13 Thread Ketil Malde
Jed Brown [EMAIL PROTECTED] writes:

 This, of course, is because `od -x' regards the input as 16-bit integers.  We
 can get saner output if we regard it is 8-bit integers.

Yes, of course. The point was that for big-endian, the word size
won't matter.  Little-endian words will be reversed with respect to
the normal (left-to-right, most significant first) way we print
numbers.

-k
-- 
If I haven't seen further, it is by standing in the footprints of giants


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Bryan O'Sullivan
Darrin Thompson wrote:

 These tricks going into Real World Haskell?

Some will, yes.

For example, the natural and naive way to write Andrew's mean function
doesn't involve tuples at all: simply tail recurse with two accumulator
parameters, and compute the mean at the end.  GHC's strictness analyser
does the right thing with this, so there's no need for seq, $!, or the
like.  It's about 3 lines of code.

b


Re: [Haskell-cafe] Re: Endianess

2008-05-13 Thread Daniel Fischer
On Tuesday, 13 May 2008 21:28, Aaron Denney wrote:
 On 2008-05-13, Ketil Malde [EMAIL PROTECTED] wrote:
  Jed Brown [EMAIL PROTECTED] writes:
  This, of course, is because `od -x' regards the input as 16-bit
  integers.  We can get saner output if we regard it is 8-bit integers.
 
  Yes, of course. The point was that for big-endian, the word size
  won't matter.  Little-endian words will be reversed with respect to
  the normal (left-to-right, most significant first) way we print
  numbers.

 Right.  Because we print numbers backwards.

Try Hebrew or Arabic then; they have the least significant digit first in 
reading order :)



Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Jeff Polakow
Hello,

 For example, the natural and naive way to write Andrew's mean function
 doesn't involve tuples at all: simply tail recurse with two accumulator
 parameters, and compute the mean at the end.  GHC's strictness analyser
 does the right thing with this, so there's no need for seq, $!, or the
 like.  It's about 3 lines of code.
 
Is this the code you mean?

meanNat = go 0 0 where
go s n [] = s / n
go s n (x:xs) = go (s+x) (n+1) xs

If so, bang patterns are still required in ghc-6.8.2 to run 
in constant memory:

meanNat = go 0 0 where
go s n [] = s / n
go !s !n (x:xs) = go (s+x) (n+1) xs

Is there some other way to write it so that ghc will essentially insert 
the bangs for me?

-Jeff



---

This e-mail may contain confidential and/or privileged information. If you 
are not the intended recipient (or have received this e-mail in error) 
please notify the sender immediately and destroy this e-mail. Any 
unauthorized copying, disclosure or distribution of the material in this 
e-mail is strictly forbidden.


[Haskell-cafe] Re: Endianess

2008-05-13 Thread Achim Schneider
Jed Brown [EMAIL PROTECTED] wrote:

 It's not that simple with bits.  They lack consistency just like the
 usual US date format and the way Germans read numbers.
 
So you claim that you pronounce 14 "tenty-four"? In German, pronunciation
is completely uniform from 13 to 99.


-- 
(c) this sig last receiving data processing entity. Inspect headers for
past copyright information. All rights reserved. Unauthorised copying,
hiring, renting, public performance and/or broadcasting of this
signature prohibited. 



Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Dan Doel
On Tuesday 13 May 2008, Jeff Polakow wrote:
 Is this the code you mean?

 meanNat = go 0 0 where
 go s n [] = s / n
 go s n (x:xs) = go (s+x) (n+1) xs

 If so, bang patterns are still required in ghc-6.8.2 to run
 in constant memory:

 meanNat = go 0 0 where
 go s n [] = s / n
 go !s !n (x:xs) = go (s+x) (n+1) xs

 Is there some other way to write it so that ghc will essentially insert
 the bangs for me?

It works fine here when compiled with -O or better.

Perhaps that should be a tip in the book? Make sure you're compiling with 
optimizations. :)

-- Dan


Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Andrew Coppin

Don Stewart wrote:

Andrew, would you say you understand the original problem of why

mean xs = sum xs / fromIntegral (length xs)

was a bad idea now? Or why the left folds were a better solution?
  


That definition of mean is wrong because it traverses the list twice. 
(Curiosity: would traversing it twice in parallel work any better?) As 
for the folds - I always *always* mix up left and right folds. Every 
single damn time I want a fold I have to look it up to see which one I 
want. I had a similar problem with learning to drive, by the way... the 
consequences there are of course much more serious than just crashing 
your _computer_...


It was probably a poor example. The point I was attempting to make is 
that in Haskell, very subtle little things can have an unexpectedly 
profound effect. If you don't know what you're supposed to be looking 
for, it can be really hard to see why your program is performing badly.


For what it's worth, I think I *do* currently have a reasonable grasp of 
how lazy evaluation works, normal order reduction, graph machines, and 
so on. And yet, I still have trouble making my code go fast sometimes. 
As I said in another post, if I can track down some *specific* programs 
I've written and had problems with, maybe we can have a more meaningful 
debate about it.




Re: [Haskell-cafe] Endianess

2008-05-13 Thread Andrew Coppin

Aaron Denney wrote:

On 2008-05-12, Andrew Coppin [EMAIL PROTECTED] wrote:
  

(Stupid little-endian nonsense... mutter mutter...)



I used to be a big-endian advocate, on the principle that it doesn't
really matter, and it was standard network byte order.  Now I'm
convinced that little endian is the way to go, as bit number n should
have value 2^n, byte number n should have value 256^n, and so forth.

Yes, in human to human communication there is value in having the most
significant bit first.  Not really true for computer-to-computer
communication.
  


It just annoys me that the number 0x12345678 has to be transmuted into 
0x78563412 just because Intel says so. Why make everything so complicated?


[Oh GOD I hope I didn't just start a Holy War...]



Re: [Haskell-cafe] Endianess (was Re: GHC predictability)

2008-05-13 Thread Lennart Augustsson
Also, the way we write numbers is little endian when writing in
Arabic; we just forgot to reverse the digits when we borrowed the
notation.

Little endian is more logical unless you also number your bits with
MSB as bit 0.

On Tue, May 13, 2008 at 7:35 PM, Aaron Denney [EMAIL PROTECTED] wrote:
 On 2008-05-12, Andrew Coppin [EMAIL PROTECTED] wrote:
 (Stupid little-endian nonsense... mutter mutter...)

 I used to be a big-endian advocate, on the principle that it doesn't
 really matter, and it was standard network byte order.  Now I'm
 convinced that little endian is the way to go, as bit number n should
 have value 2^n, byte number n should have value 256^n, and so forth.

 Yes, in human to human communication there is value in having the most
 significant bit first.  Not really true for computer-to-computer
 communication.

 --
 Aaron Denney
 --




[Haskell-cafe] HSFFIG compilation problem

2008-05-13 Thread Harri Kiiskinen
Hello,

I've tried to compile hsffig-1.0pl2, unsuccesfully, also from the
unstable darcs repo. I'm using ghc-6.8 (ghc6-6.8.2 from Debian
unstable). HSFFIG comes with its own version of Cabal, but I cannot get
past the configuration phase (./cabal-setup configure), when I get the
following output:

Configuring HSFFIG-1.0...
configure: searching for ghc in path.
configure: found ghc at /usr/bin/ghc
configure: looking for package tool: ghc-pkg near compiler
in /usr/bin/ghc
configure: found package tool in /usr/bin/ghc-pkg
configure: Using install prefix: /usr/local
configure: Using compiler: /usr/bin/ghc
configure: Compiler flavor: GHC
configure: Compiler version: 6.8.2
configure: Using package tool: /usr/bin/ghc-pkg
configure: Using haddock: /usr/bin/haddock
configure: Using happy: /usr/bin/happy
configure: Using alex: /usr/bin/alex
configure: Using hsc2hs: /usr/bin/hsc2hs
configure: Using cpphs: /usr/bin/cpphs
configure: Reading installed packages...
configure: Dependency base-any: using base-3.0.1.0
cannot satisfy dependency text-any

Does anyone have an idea, how the dependency 'text-any' could be
satisfied?

Harri K.



Re: [Haskell-cafe] Maybe a, The Rationale

2008-05-13 Thread PR Stanley



Paul:   What is the underlying rationale for the Maybe data type?

It is the equivalent of a database field that can be NULL.


Paul: Shock, horror! The null value or the absence of any 
value denoted by null is not really in harmony with the relational model.




 is it the safe style of programming it encourages?

Yes.  Consider C, where this is typically done with a NULL pointer, or
Lisp, where you use the empty list, nil.  These are perfectly good
values in themselves, while Nothing is just Nothing, if you pardon the
pun.
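
A small sketch of the safety the reply describes (safeHead is my own
example, not from the thread): Nothing is not a value of the wrapped type,
unlike C's NULL or Lisp's nil, so the type system forces callers to handle
the missing case.

```haskell
import Data.Maybe (fromMaybe)

-- head that cannot crash: the empty case is visible in the type.
safeHead :: [a] -> Maybe a
safeHead []    = Nothing
safeHead (x:_) = Just x

main :: IO ()
main = do
  print (safeHead [1, 2, 3 :: Int])             -- Just 1
  print (fromMaybe 0 (safeHead ([] :: [Int])))  -- 0: caller's default
```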

-k
--
If I haven't seen further, it is by standing in the footprints of giants




Re: [Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Don Stewart
ndmitchell:
 Hi
 
   1. What is ghc-core?
 
 You actually answer this question as part of question 2. Think of it
 as simple Haskell with some additional bits.
 
   2. Does anybody know how to actually read GHC's Core output anyway?
  To me,
  it looks almost exactly like very, very complicated Haskell source with a
  suspicious concentration of case expressions - but I understand that in the
  Core language, many constructs actually mean something quite different.
 
 There is one difference from standard Haskell I am aware of. In Core,
 case x of _ -> 1 will evaluate x, in Haskell it won't. Other than
 that, it's just Haskell, but without pattern matching and only cases -
 hence the rather high number of cases.

Well, 'let's too, which bind either unlifted or lifted values.

If they're binding lifted things, that's a thunk being allocated.

-- Don


Re: [Haskell-cafe] Analog data acquisition

2008-05-13 Thread Derek Elkins
On Tue, 2008-05-13 at 19:48 +0100, Tom Nielsen wrote:
 Yes. I guess I have to wait for chapter 19, then?

Just read the FFI Addendum:
http://www.cse.unsw.edu.au/~chak/haskell/ffi/

It's not complicated at all.



Re: [Haskell-cafe] GHC predictability

2008-05-13 Thread Brandon S. Allbery KF8NH


On 2008 May 13, at 17:01, Andrew Coppin wrote:

That definition of mean is wrong because it traverses the list  
twice. (Curiosity: would traversing it twice in parallel work any  
better?) As for the folds - I always *always* mix up


It might work better but you're still wasting a core that could be  
put to better use doing something more sensible.  It's pretty much  
always best to do all the calculations that require traversing a given  
list in a single traversal.
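
A sketch of the single-traversal approach Brandon describes, computing the
sum and the length together in one strict left fold (the function name is
mine; bang patterns keep the accumulators evaluated even without -O):

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- One pass over the list: accumulate sum and count together, then
-- divide at the end.
meanOnePass :: [Double] -> Double
meanOnePass xs = s / fromIntegral n
  where
    (s, n) = foldl' step (0, 0 :: Int) xs
    step (!acc, !cnt) x = (acc + x, cnt + 1)

main :: IO ()
main = print (meanOnePass [1 .. 10])  -- 5.5
```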


--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon university    KF8NH




Re: [Haskell-cafe] Endianess

2008-05-13 Thread Brandon S. Allbery KF8NH


On 2008 May 13, at 17:12, Andrew Coppin wrote:


[Oh GOD I hope I didn't just start a Holy War...]



Er, I'd say it's already well in progress.  :/

--
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] [EMAIL PROTECTED]
system administrator [openafs,heimdal,too many hats] [EMAIL PROTECTED]
electrical and computer engineering, carnegie mellon university    KF8NH




Re: [Haskell-cafe] Data.Dynamic over the wire

2008-05-13 Thread John Meacham
I use a trick like this to allow saving of dynamics into ho files for
jhc, the same thing will work for network connections.

see Info.Info for the data type, and Info.Binary for the binary
serialization routines.

http://repetae.net/dw/darcsweb.cgi?r=jhc;a=tree;f=/Info

John

-- 
John Meacham - ⑆repetae.net⑆john⑈


Re: [Haskell-cafe] HSFFIG compilation problem

2008-05-13 Thread Gwern Branwen
On 2008.05.13 23:31:17 +0200, Harri Kiiskinen [EMAIL PROTECTED] scribbled 
1.1K characters:
 Hello,

 I've tried to compile hsffig-1.0pl2, unsuccesfully, also from the
 unstable darcs repo. I'm using ghc-6.8 (ghc6-6.8.2 from Debian
 unstable). HSFFIG comes with its own version of Cabal, but I cannot get
 past the configuration phase (./cabal-setup configure), when I get the
 following output:

 Configuring HSFFIG-1.0...
 configure: searching for ghc in path.
 configure: found ghc at /usr/bin/ghc
 configure: looking for package tool: ghc-pkg near compiler
 in /usr/bin/ghc
 configure: found package tool in /usr/bin/ghc-pkg
 configure: Using install prefix: /usr/local
 configure: Using compiler: /usr/bin/ghc
 configure: Compiler flavor: GHC
 configure: Compiler version: 6.8.2
 configure: Using package tool: /usr/bin/ghc-pkg
 configure: Using haddock: /usr/bin/haddock
 configure: Using happy: /usr/bin/happy
 configure: Using alex: /usr/bin/alex
 configure: Using hsc2hs: /usr/bin/hsc2hs
 configure: Using cpphs: /usr/bin/cpphs
 configure: Reading installed packages...
 configure: Dependency base-any: using base-3.0.1.0
 cannot satisfy dependency text-any

 Does anyone have an idea, how the dependency 'text-any' could be
 satisfied?

 Harri K.

hsffig is old enough you should probably be using the latest, from Darcs 
(although the last patch is still from February 2006): 
http://www.golubovsky.org/repos/hsffig-1.1. I got a little further by running 
make and then modifying the build-depends:

+Build-depends:  base>=3, parsec, hxt, Cabal>=1.1.3, filepath, unix, process, 
containers, array, directory

Which leads to a bunch of build errors:

[ 5 of 14] Compiling TokenOps ( programs/TokenOps.hs, 
dist/build/ffipkg/ffipkg-tmp/TokenOps.o )

programs/TokenOps.hs:48:11:
No instance for (Text.Parsec.Prim.Stream
   s mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
  arising from a use of `tokenTest' at programs/TokenOps.hs:48:11-45
Possible fix:
  add an instance declaration for
  (Text.Parsec.Prim.Stream
 s mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
In the expression: tokenTest (TCOMM_OPEN undefined) tf
In the definition of `commOpen':
commOpen = tokenTest (TCOMM_OPEN undefined) tf
 where
 tf (TCOMM_OPEN _) (TCOMM_OPEN _) = True
 tf _ _ = False

programs/TokenOps.hs:52:12:
No instance for (Text.Parsec.Prim.Stream
   s1 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
  arising from a use of `tokenTest' at programs/TokenOps.hs:52:12-47
Possible fix:
  add an instance declaration for
  (Text.Parsec.Prim.Stream
 s1 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
In the expression: tokenTest (TCOMM_CLOSE undefined) tf
In the definition of `commClose':
commClose = tokenTest (TCOMM_CLOSE undefined) tf
  where
  tf (TCOMM_CLOSE _) (TCOMM_CLOSE _) = True
  tf _ _ = False

programs/TokenOps.hs:58:10:
No instance for (Text.Parsec.Prim.Stream
   s2 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
  arising from a use of `tokenTest' at programs/TokenOps.hs:58:10-50
Possible fix:
  add an instance declaration for
  (Text.Parsec.Prim.Stream
 s2 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
In the expression: tokenTest (TKFILE undefined undefined) tf
In the definition of `anyFile':
anyFile = tokenTest (TKFILE undefined undefined) tf
where
tf (TKFILE _ _) (TKFILE _ _) = True
tf _ _ = False

programs/TokenOps.hs:64:12:
No instance for (Text.Parsec.Prim.Stream
   s3 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
  arising from a use of `tokenTest' at programs/TokenOps.hs:64:12-54
Possible fix:
  add an instance declaration for
  (Text.Parsec.Prim.Stream
 s3 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
In the expression: tokenTest (TKSTRING undefined undefined) tf
In the definition of `anyString':
anyString = tokenTest (TKSTRING undefined undefined) tf
  where
  tf (TKSTRING _ _) (TKSTRING _ _) = True
  tf _ _ = False

programs/TokenOps.hs:70:9:
No instance for (Text.Parsec.Prim.Stream
   s4 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
  arising from a use of `tokenTest' at programs/TokenOps.hs:70:9-48
Possible fix:
  add an instance declaration for
  (Text.Parsec.Prim.Stream
 s4 mtl-1.1.0.0:Control.Monad.Identity.Identity Token)
In the expression: tokenTest (TKDEF undefined undefined) tf
In the definition of `anyDef':
anyDef = tokenTest (TKDEF undefined undefined) tf
   where
   tf (TKDEF _ _) (TKDEF _ _) = True
   tf _ _ = False

Re: [Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Albert Y. C. Lai

Andrew Coppin wrote:
2. Does anybody know how to actually read GHC's Core output anyway? To 
me, it looks almost exactly like very, very complicated Haskell source 
with a suspicious concentration of case expressions - but I understand 
that in the Core language, many constructs actually mean something quite 
different.


I can read most of GHC Core. I was never taught, and I never asked. I 
did not read GHC's source code. I just guessed.


I agree with your assessment about the domination by case expressions. 
They spell out the evaluation order. You need truckloads of them.


I slightly agree with your assessment about complicated, but I call it 
detailed and tedious. This is an assembly language for functional 
programming. If it weren't detailed and tedious, it would have no right 
to exist.
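As a small illustration (mine, not from the thread): here is a two-argument addition, and in a comment a paraphrase of the Core that GHC typically produces for it, with nested case expressions spelling out the evaluation order.

```haskell
-- Source program:
addInts :: Int -> Int -> Int
addInts x y = x + y

-- GHC Core for addInts, paraphrased with simplified names. The nested
-- cases make the evaluation order explicit: force x to WHNF, force y,
-- add the unboxed payloads with the primitive (+#), then rebox:
--
--   addInts = \ (x :: Int) (y :: Int) ->
--     case x of { I# x# ->
--     case y of { I# y# ->
--     I# (x# +# y#) }}

main :: IO ()
main = print (addInts 2 3)
```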

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Commutative monads vs Applicative functors

2008-05-13 Thread Ronald Guida
I have a few questions about commutative monads and applicative functors.

From what I have read about applicative functors, they are weaker than
monads because with a monad, I can use the results of a computation to
select between alternative future computations and their side effects,
whereas with an applicative functor, I can only select between the
results of computations, while the structure of those computations and
their side effects are fixed in advance.
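A minimal sketch of that distinction, using Maybe (my example, not from the original message): with a Monad an earlier result can choose which computation runs next, while with an Applicative the computations are fixed up front and only their results are combined.

```haskell
import Control.Applicative (liftA2)

-- Monadic: the shape of the continuation depends on x, so the
-- computation chosen next is selected by an earlier result.
branchM :: Maybe Int -> Maybe Int
branchM mx = mx >>= \x -> if even x then Just (x * 2) else Nothing

-- Applicative: both computations are fixed in advance; we can only
-- combine their results.
combineA :: Maybe Int -> Maybe Int -> Maybe Int
combineA = liftA2 (+)

main :: IO ()
main = mapM_ print [branchM (Just 4), branchM (Just 3), combineA (Just 1) (Just 2)]
```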

But then there are commutative monads.  I'm not exactly sure what a
commutative monad is, but my understanding is that in a commutative
monad the order of side effects does not matter.

This leads me to wonder, are commutative monads still stronger than
applicative functors, or are they equivalent?

And by the way, what exactly is a commutative monad?
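The standard example of a commutative monad is Reader: its only "effect" is consulting a fixed environment, so reordering independent actions cannot change the result. A hand-rolled version (defined inline so the sketch is self-contained, rather than importing mtl):

```haskell
-- A minimal Reader monad. Because the environment never changes,
-- independent reads commute.
newtype Reader r a = Reader { runReader :: r -> a }

instance Functor (Reader r) where
  fmap f (Reader g) = Reader (f . g)

instance Applicative (Reader r) where
  pure x = Reader (const x)
  Reader f <*> Reader g = Reader (\r -> f r (g r))

instance Monad (Reader r) where
  Reader g >>= k = Reader (\r -> runReader (k (g r)) r)

ask :: Reader r r
ask = Reader id

-- The same two reads, performed in either order:
pairA, pairB :: Reader Int (Int, Int)
pairA = do { x <- ask; y <- fmap (* 2) ask; return (x, y) }
pairB = do { y <- fmap (* 2) ask; x <- ask; return (x, y) }

main :: IO ()
main = print (runReader pairA 21, runReader pairB 21)
```

State, by contrast, is not commutative: swapping a `get` past a `put` changes the answer.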


[Haskell-cafe] Re: Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Neal Alexander

Philippa Cowderoy wrote:

On Tue, May 13, 2008 5:53 am, Neal Alexander wrote:

I can post the full profiling info if anyone really cares.



Any info is helpful. It's taking a while to get round to things, but the
more relevant info we have to hand when we do the easier it is to improve
things and the less begging for data we have to do!


I stripped the code down to just the parsec related stuff and retested it.

http://72.167.145.184:8000/parsec_test/Parsec2.prof
http://72.167.145.184:8000/parsec_test/Parsec3.prof

And the parser with a 9mb (800 kb gziped) sample log file:
http://72.167.145.184:8000/parsec_test.tar.gz



Re: [Haskell-cafe] Re: Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Antoine Latter
On Tue, May 13, 2008 at 9:23 PM, Neal Alexander [EMAIL PROTECTED] wrote:
  I stripped the code down to just the parsec related stuff and retested it.

  http://72.167.145.184:8000/parsec_test/Parsec2.prof
  http://72.167.145.184:8000/parsec_test/Parsec3.prof

  And the parser with a 9mb (800 kb gziped) sample log file:
  http://72.167.145.184:8000/parsec_test.tar.gz


So I've been picking at some ways to speed up Parsec 3.  I haven't had
much success at this benchmark, but one thing confused me:

In my hacked-up version, when I change the monadic type from a data
declaration to a newtype declaration, I get a significant slowdown.
In the program posted by Neal, I go from ~43 s with data to about 1
minute with a newtype.

Is this expected?  I don't really understand why adding an extra layer
of indirection should speed things up.

-Antoine


Re[2]: [Haskell-cafe] Re: Parsec3 performance issues (in relation to v2)

2008-05-13 Thread Bulat Ziganshin
Hello Antoine,

Wednesday, May 14, 2008, 8:43:47 AM, you wrote:

 Is this expected?  I don't really understand why adding an extra layer
 of indirection should speed things up.

adding laziness may improve performance by avoiding the calculation of
unnecessary stuff, or by moving it into a later stage where it will be
immediately consumed
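A tiny example (mine, not the Parsec code itself) of the difference being pointed at: a `data` constructor adds a lazy layer that pattern matching must force at runtime, while a `newtype` constructor is erased at compile time.

```haskell
import Control.Exception (SomeException, evaluate, try)

data    DBox a = DBox a   -- real runtime constructor
newtype NBox a = NBox a   -- erased at compile time

-- Matching on a data constructor forces the argument to WHNF:
fromD :: DBox a -> Bool
fromD (DBox _) = True

-- A newtype pattern forces nothing, so even bottom passes through:
fromN :: NBox a -> Bool
fromN (NBox _) = True

main :: IO ()
main = do
  print (fromN undefined)   -- fine: nothing is forced
  r <- try (evaluate (fromD undefined)) :: IO (Either SomeException Bool)
  print (either (const "DBox match forced bottom") show r)
```

Which representation is faster depends on where the extra laziness lands, which may explain why the `data` version of the parser can beat the `newtype` one here.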


-- 
Best regards,
 Bulat                            mailto:[EMAIL PROTECTED]



Re: [Haskell-cafe] Re: GHC predictability

2008-05-13 Thread Richard A. O'Keefe


On 14 May 2008, at 8:58 am, Andrew Coppin wrote:
What I'm trying to say [and saying very badly] is that Haskell is an  
almost terrifyingly subtle language.


Name me a useful programming language that isn't.
Simply interchanging two for-loops, from
for (i = 0; i < N; i++) for (j = 0; j < N; j++)
to  for (j = 0; j < N; j++) for (i = 0; i < N; i++)
when marching over an array, can easily slow you down
by nearly two orders of magnitude in C.
[Hint: read "What every computer scientist needs to know
about memory".]  For a real shock, take a look at what
your C++ templates are doing...

There's one big difference between Haskell and language T (my other
preferred language).  Seemingly insignificant changes in Haskell can
kill performance, but seemingly insignificant changes in language T
can take you into territory the library designers never thought of
where there are lions, tigers, and bears in abundance.  Unexpectedly
slow is better than inexplicably buggy.




Re: [Haskell-cafe] Short circuiting and the Maybe monad

2008-05-13 Thread Janis Voigtlaender

Graham Fawcett wrote:

Yes, but that's still a 'quick' short-circuiting. In your example, if
'n' is Nothing, then the 'f >>= g >>= h' thunks will not be forced
(thanks to lazy evaluation), regardless of associativity. Tracing
verifies this:


No, it doesn't. What your tracing verifies is that the f, g, and h will
not be evaluated. It doesn't verify that the 'f >>= g >>= h' part of the
expression causes no evaluation overhead. Because that is not true.
Consider the following:


import Debug.Trace



data Maybe' a = Nothing' | Just' a  deriving Show



instance Monad Maybe' where
    return = Just'
    Nothing' >>= _ = trace "match" Nothing'
    Just' a  >>= k = k a



talk s = Just' . (trace s)
f = talk "--f"
g = talk "--g"
h = talk "--h"



foo n = n >>= f >>= g >>= h


Now:

*Main> foo Nothing'
match
match
match
Nothing'

So you get three pattern-matches on Nothing', where with the associative
variant


foo' n = n >>= \a -> f a >>= \b -> g b >>= h


you get only one:

*Main> foo' Nothing'
match
Nothing'

For a way to obtain such improvements automatically, and without
touching the code, you may want to look into

http://wwwtcs.inf.tu-dresden.de/~voigt/mpc08.pdf

Ciao, Janis.

--
Dr. Janis Voigtlaender
http://wwwtcs.inf.tu-dresden.de/~voigt/
mailto:[EMAIL PROTECTED]