Re: [Haskell-cafe] Vancouver Haskell users meeting

2008-06-08 Thread Ryan Dickie
Same deal, but I'm in Ottawa for the summer. I'll be back around September.

--ryan

2008/6/6 Asumu Takikawa [EMAIL PROTECTED]:
 Hi. I'd be interested in a meeting like this, but unfortunately since
 UBC is done for winter term I'm out of Canada for the summer. If anyone
 organizes a meet-up come fall I'd happily attend.

 Cheers,
 AT

 On 12:48 Mon 02 Jun , Jon Strait wrote:
Anyone else here from Vancouver (Canada)?  I thought it would be great
to have a little informal get-together at a local cafe and share how
we're currently using Haskell, or really anything (problems,
comparisons, useful software tools, etc.) in relation to Haskell.
    I'm scheduling a meeting for this Thursday, June 5th, at 7PM at
[1]Waazubee Cafe.  (At Commercial Dr. and 1st Ave.)
They have wireless internet access.  I'll get a table near the back,
bring my laptop, and will have a copy of Hudak's SOE book (the front
cover is impossible to miss) out on the table.
If anyone wants to meet, but this Thursday is not a good day for you,
let me know what days are better and we'll move the meeting.  If anyone
is sure that they will come this Thursday, you might let me know, so I
    can have an idea about the resistance to changing the day, if needed.
Thanks,
Jon

 References

1. http://www.waazubee.com/content/directions.php



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Why functional programming matters

2008-01-23 Thread Ryan Dickie
On Jan 23, 2008 5:29 AM, Simon Peyton-Jones [EMAIL PROTECTED] wrote:

 Friends

 Over the next few months I'm giving two or three talks to groups of *non*
 functional programmers about why functional programming is interesting and
 important.  If you like, it's the same general goal as John Hughes's famous
 paper "Why functional programming matters".

 Audience: some are technical managers, some are professional programmers;
 but my base assumption is that none already know anything much about
 functional programming.

 Now, I can easily rant on about the glories of functional programming, but
 I'm a biased witness -- I've been doing this stuff too long.  So this
 message is ask your help, especially if you are someone who has a
 somewhat-recent recollection of realising "wow, this fp stuff is so
 cool/useful/powerful/etc.".

 I'm going to say some general things, of course, about purity and effects,
 modularity, types, testing, reasoning, parallelism and so on. But I hate
 general waffle, so I want to give concrete evidence, and that is what I
 particularly want your help with.  I'm thinking of two sorts of evidence:


 1. Small examples of actual code. The goal here is (a) to convey a
 visceral idea of what functional programming *is*, rather than just assume
 the audience knows (they don't), and (b) to convey an idea of why it might
 be good.  One of my favourite examples is quicksort, for reasons explained
 here:
 http://haskell.org/haskellwiki/Introduction#What.27s_good_about_functional_programming.3F

 But I'm sure that you each have a personal favourite or two. Would you
 like to send them to me, along with a paragraph or two about why you found
 it compelling?  For this purpose, a dozen lines of code or so is probably a
 maximum.
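(For context, the quicksort example that wiki page discusses is roughly the
classic two-case definition below; this is a sketch of the well-known version,
not text quoted from the page.)

quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) = quicksort [x | x <- xs, x < p]
                   ++ [p] ++
                   quicksort [x | x <- xs, x >= p]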


 2. War stories from real life.  E.g. "In company X in 2004 they rewrote
 their application in Haskell/Caml with result Y".  Again, for my purpose I
 can't tell very long stories; but your message can give a bit more detail
 than one might actually give in a presentation.  The more concrete and
 specific, the better.   E.g. what, exactly, about using a functional
 language made it a win for you?


 If you just reply to me, with evidence of either kind, I'll glue it
 together (regardless of whether I find I can use it in my talks), and put
 the result on a Wiki page somewhere.  In both cases pointers to blog entries
 are fine.

 Quite a lot of this is FP-ish rather than Haskell-ish, but I'm consulting
 the Haskell mailing lists first because I think you'll give me plenty to go
 on; and because at least one of the talks *is* Haskell-specific.  However,
 feel free to reply in F# or Caml if that's easier for you.

 Thanks!

 Simon


I'm still just learning Haskell, but maybe as a n00b I can give you some
insight into what I think is important.

I will take a guess here and say most of your audience is from the
object-oriented crowd. Their software engineering practices are probably
entirely based upon the idea of wrapping state up in objects and passing
them around. They're probably going to want ways to leverage these
techniques without dropping everything.

I personally think it is neat that non-functional languages are starting to
borrow many ideas from functional languages. C# has lambdas and LINQ, and Java
might be adding closures. Scala is functional but has access to all the
goodies of the Java library. Python has list comprehensions. Even C++ is
going to be adding lambda expressions (which are really handy for the STL
algorithms, which are themselves functional in style).
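As a tiny illustration of what is being borrowed, here is roughly what a
lambda, a higher-order function, and a list comprehension look like at home in
Haskell (a sketch only; the names are made up):

-- A lambda passed to the higher-order function map.
doubled :: [Int]
doubled = map (\x -> 2 * x) [1 .. 10]

-- A list comprehension, of the kind Python also has.
evens :: [Int]
evens = [ x | x <- [1 .. 20], even x ]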

Error handling and QA are very important in the real world. It might not
hurt to show a few simple QuickCheck examples and cases where errors are
caught at compile time. There are probably many examples in GHC development.
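For example, a minimal QuickCheck property could look like the sketch below
(illustrative only; prop_reverse is a made-up name, not code from GHC):

import Test.QuickCheck

-- QuickCheck generates random input lists and checks the law automatically.
prop_reverse :: [Int] -> [Int] -> Bool
prop_reverse xs ys = reverse (xs ++ ys) == reverse ys ++ reverse xs

main :: IO ()
main = quickCheck prop_reverse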


--
Ryan Dickie
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] ANN / CFP - LLVM bindings for Haskell

2008-01-03 Thread Ryan Dickie
On Jan 3, 2008 3:43 AM, Bryan O'Sullivan [EMAIL PROTECTED] wrote:

 This is an early release of Haskell bindings for the popular LLVM
 compiler infrastructure project.

 If you don't know what LLVM is, it's a wonderful toybox of compiler
 components, from a complete toolchain supporting multiple architectures
 through a set of well-defined APIs and IR formats that are designed for
 building interesting software with.

 The official LLVM home page is here:

  http://llvm.org/

 The Haskell bindings are based on Gordon Henriksen's C bindings.  The C
 bindings are almost untyped, but the Haskell bindings re-add type safety
 to prevent runtime crashes and general badness.

 Currently, the entire code generation system is implemented, with most
 LLVM data types supported (notably absent are structs).  Also plugged in
 is JIT support, so you can generate code at runtime from Haskell and run
 it immediately.  I've attached an example.

 Please join in the hacking fun!

  darcs get http://darcs.serpentine.com/llvm

 If you want a source tarball, fetch it from here:

  http://darcs.serpentine.com/llvm/llvm-0.0.2.tar.gz

 (Hackage can't host code that uses GHC 6.8.2's language extension names
 yet.)

 There's very light documentation at present, but it ought to be enough
 to get you going.

b



Maybe I am asking an uninformed n00b question, but how come GHC has -fvia-C
and is also working on an asm backend? Is there any reason why they could not
build off the work of LLVM (which supports various architectures), then
ditch those two backends and call it a day?

--ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] nbody (my own attempt) and performance problems

2007-11-28 Thread Ryan Dickie
On Nov 28, 2007 11:18 AM, Dan Weston [EMAIL PROTECTED] wrote:

 Just out of curiosity...

  --some getter functions
  pVel !(_,vel,_) = vel
  pPos !(pos,_,_) = pos
  pMass !(!_,!_,!mass) = mass

 What does the !(...) buy you? I thought tuples were already strict by
 default in patterns (you'd need ~(...) to make them irrefutable), so
 isn't the above equivalent to:

 --some getter functions
 pVel  (_,vel,_) = vel
 pPos  (pos,_,_) = pos
 pMass (!_,!_,!mass) = mass


Yes, you are right. I did not need those extra !'s in front of the tuples.



 And why in any case are the tuple components for pMass strict but for
 pVel and pPos non-strict? Is is that mass is always used but position
 and velocity are not?


Without all three components of the tuple in pMass being !'d, I find a 2x
slowdown. This includes trying pMass (_,_,!mass), pMass (!_,_,!mass), and all
other combinations.

Why that happens, I do not know. pMass is only used where its argument (the
planet tuple) was defined strictly, as below. I would expect p1 to be fully
evaluated before pMass p1 is ever called.

offset_momentum (!p1,p2,p3,p4,p5) = ( pp1,p2,p3,p4,p5 ) where
pp1 = ( pPos p1,ppvel,pMass p1 )
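For comparison, making the strictness part of the data type removes the need
for bang patterns at the use sites. A sketch (the names are illustrative; this
is not the code attached to this thread):

data Vec3   = V !Double !Double !Double
data Planet = Planet !Vec3 !Vec3 !Double   -- position, velocity, mass

-- Strict fields are forced when a Planet is constructed, so the accessor
-- needs no extra annotations.
pMass' :: Planet -> Double
pMass' (Planet _ _ m) = m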



 Ryan Dickie wrote:
  I sat down tonight and did tons of good learning (which was my goal).
  Yes, the variable names in the unrolling is a little ugly but it helps
  to read the C++ version for context. There are two for loops (advN is
  each inner one unrolled). the other function names match the C++
  version.  It was my goal to implement an unrolled version of that.
 
  Fortunately, my performance is excellent now. It is only 4x slower than
  the C++ version and 2x slower than the Haskell one listed (which uses
  pointer trickery). I am sure there could be more done but I am at my
  limit of comprehension. But if I may guess, I would say that any speed
  issues now are related to a lack of in place updating for variables and
  structures.
 
  I'd also like to thank everyone for their help so far. I have attached
  my latest version.
 
  --ryan
 
   On Nov 27, 2007 7:14 PM, Sterling Clover [EMAIL PROTECTED] wrote:
 
  The first step would be profiling -- i.e. compiling with -prof
 -auto-
  all to tag each function as a cost center, then running with +RTS -p
  to generate a cost profile. The problem here is you've got massive
  amounts of unrolling done already, so it's sort of hard to figure
 out
  what's doing  what, and the names you've given the unrolled
 functions
  are... less than helpful. (first rule of optimization: optimize
  later.)  The use of tuples shouldn't be a problem per se in terms of
  performance, but it really hurts readability to lack clear type
  signatures and types. You'd probably be better off constructing a
  vector data type as does the current Haskell entry -- and by forcing
  it to be strict and unboxed (you have nearly no strictness
  annotations I note -- and recall that $! only evaluates its argument
  to weak head normal form, which means that you're just checking if
  the top-level constructor is _|_) you'll probably get better
  performance to boot. In any case, declaring type aliases for the
  various units you're using would also help readability quite a bit.
 
  --S
 
  On Nov 27, 2007, at 5:41 PM, Ryan Dickie wrote:
 
I thought it would be a nice exercise (and a good learning
experience) to try and solve the nbody problem from the debian
language shootout. Unfortunately, my code sucks. There is a
 massive
space leak and performance is even worse. On the bright side, my
implementation is purely functional. My twist: I manually
 unrolled
a few loops from the C++ version.
   
I believe that most of my performance problems stem from my abuse
of tuple. The bodies are passed as a tuple of planets, a planet
 is
a tuple of (position, velocity, mass) and the vectors position
 and
velocity are also tuples of type double. My lame justification
 for
that is to make it nice and handy to pass data around.
   
Any tips would be greatly appreciated.
   
--ryan
nbody3.hs
 
 
 
  
 



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A tale of three shootout entries

2007-11-27 Thread Ryan Dickie
Oops forgot to hit reply-to-all.. resending..

N-body is looking good. I am running an amd64 3000+ on GHC 6.8.1.  The
Debian shootout is showing a huge gap between GHC 6.6 and g++, but I am not
seeing that gap.  One concern, though, is that the code doesn't look very
Haskellish: so much pointer manipulation.

For the nbody c++ code I am getting:
-0.169075164
-0.169031665

real0m11.168s
user0m10.891s
sys 0m0.043s

and for the nbody haskell code I am getting:
-0.169075164
-0.169031665

real0m11.595s
user0m11.422s
sys 0m0.044s


On Nov 26, 2007 8:21 PM, Don Stewart [EMAIL PROTECTED] wrote:

 s.clover:
  In some spare time over the holidays I cooked up three shootout
  entries, for Fasta, the Meteor Contest, and Reverse Complement. I

 Yay!

  First up is the meteor-contest entry.
 
  http://shootout.alioth.debian.org/gp4/benchmark.php?test=meteor&lang=ghc&id=5
 
  This is the clear win of the bunch, with significantly improved time
  thanks to its translation of the better algorithm from Clean.

 Well done! Though looks like we'll have to follow the C++ implementation
 to be really competitive.

  Next is reverse-complement.
 
  http://shootout.alioth.debian.org/gp4/benchmark.php?test=revcomp&lang=ghc&id=3

 Very good. I'm glad someone looked at that, since the old code was
 moderately naive (first bytestring effort).

  Finally, there's fasta.
 
  http://shootout.alioth.debian.org/gp4/benchmark.php?test=fasta&lang=ghc&id=2

 Yeah, we should do something better here. Hmm.

  p.s. It looks like they've deprecated chameneos in favor of a new
  version, chameneos-redux. As this was one of the places Haskell
  really rocked the competition, it would probably be worth updating

 Definitely. I note also we're beating Erlang on the new thread-ring
 benchmark too,


 http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all

  the Haskell entry for the new benchmark. Also, the n-bodies benchmark
  seems like another that could be much improved.

 Yeah, that's a hard one.

 -- Don

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] A tale of three shootout entries

2007-11-27 Thread Ryan Dickie
Never mind. I screwed up the timings.
The new Haskell timings are still a huge improvement, but they are:

-0.169075164
-0.169031665

real0m27.196s
user0m19.688s
sys 0m0.163s


On Nov 27, 2007 11:25 AM, Ryan Dickie [EMAIL PROTECTED] wrote:

 Oops forgot to hit reply-to-all.. resending..


 N-body is looking good. I am running an amd64 3000+ on GHC 6.8.1.  The
 Debian shootout is showing a huge gap between GHC 6.6 and g++, but I am not
 seeing that gap.  One concern, though, is that the code doesn't look very
 Haskellish: so much pointer manipulation.

 For the nbody c++ code I am getting:
 -0.169075164
 -0.169031665

 real0m11.168s
 user0m10.891s
 sys 0m0.043s

 and for the nbody haskell code I am getting:
 -0.169075164
 -0.169031665

 real0m11.595s
 user0m11.422s
 sys 0m0.044s


 On Nov 26, 2007 8:21 PM, Don Stewart [EMAIL PROTECTED] wrote:

  s.clover:
   In some spare time over the holidays I cooked up three shootout
   entries, for Fasta, the Meteor Contest, and Reverse Complement. I
 
  Yay!
 
   First up is the meteor-contest entry.
  
   http://shootout.alioth.debian.org/gp4/benchmark.php?test=meteor&lang=ghc&id=5
  
   This is the clear win of the bunch, with significantly improved time
   thanks to its translation of the better algorithm from Clean.
 
  Well done! Though looks like we'll have to follow the C++ implementation
 
  to be really competitive.
 
   Next is reverse-complement.
  
   http://shootout.alioth.debian.org/gp4/benchmark.php?test=revcomp&lang=ghc&id=3
 
  Very good. I'm glad someone looked at that, since the old code was
  moderately naive (first bytestring effort).
 
   Finally, there's fasta.
  
   http://shootout.alioth.debian.org/gp4/benchmark.php?test=fasta&lang=ghc&id=2
 
  Yeah, we should do something better here. Hmm.
 
   p.s. It looks like they've deprecated chameneos in favor of a new
   version, chameneos-redux. As this was one of the places Haskell
   really rocked the competition, it would probably be worth updating
 
  Definitely. I note also we're beating Erlang on the new thread-ring
  benchmark too,
 
 
  http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all
 
   the Haskell entry for the new benchmark. Also, the n-bodies benchmark
   seems like another that could be much improved.
 
  Yeah, that's a hard one.
 
  -- Don
 



___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] nbody (my own attempt) and performance problems

2007-11-27 Thread Ryan Dickie
I thought it would be a nice exercise (and a good learning experience) to
try and solve the nbody problem from the Debian language shootout.
Unfortunately, my code sucks. There is a massive space leak, and performance
is even worse. On the bright side, my implementation is purely functional.
My twist: I manually unrolled a few loops from the C++ version.

I believe that most of my performance problems stem from my abuse of tuples.
The bodies are passed as a tuple of planets, a planet is a tuple of
(position, velocity, mass), and the position and velocity vectors are also
tuples of Doubles. My lame justification for that is to make it nice and
handy to pass data around.
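One lightweight improvement would be to at least name the tuple shapes with
type aliases, as the follow-ups suggest; a sketch (not part of the attached
file):

type Vec3   = (Double, Double, Double)
type Planet = (Vec3, Vec3, Double)   -- (position, velocity, mass)

pMass :: Planet -> Double
pMass (_, _, mass) = mass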

Any tips would be greatly appreciated.

--ryan
{--
	The Great Computer Language Shootout
	http://shootout.alioth.debian.org/

	N-body problem

	C version contributed by Christoph Bauer
	converted to C++ and modified by Paul Kitchin
	This crappy Haskell version by Ryan Dickie based on the above two
--}
import System
import Text.Printf

(a,b,c) .+ (x,y,z) = (a+x,b+y,c+z)
(a,b,c) .- (x,y,z) = (a-x,b-y,c-z)
x .* (a,b,c) = (x*a,x*b,x*c)
mag2 (x,y,z) = x*x + y*y + z*z
mag = sqrt . mag2

--some getter functions
pVel (_,vel,_) = vel
pPos (pos,_,_) = pos
pMass (_,_,mass) = mass

days_per_year = 365.24::Double
solar_mass = 4 * pi * pi::Double
delta_time = 0.01::Double

planets = (p1,p2,p3,p4,p5) where
	p1 = ((0,0,0),(0,0,0),solar_mass)
	p2 = (p2pos,p2vel,p2mass)
	p3 = (p3pos,p3vel,p3mass)
	p4 = (p4pos,p4vel,p4mass)
	p5 = (p5pos,p5vel,p5mass)
	p2pos = (4.84143144246472090e+00,-1.16032004402742839e+00,-1.03622044471123109e-01)
	p2vel = days_per_year .* (1.66007664274403694e-03, 7.69901118419740425e-03 ,-6.90460016972063023e-05)
	p2mass = 9.54791938424326609e-04 * solar_mass
	p3pos = (8.34336671824457987e+00,4.12479856412430479e+00,-4.03523417114321381e-01)
	p3vel = days_per_year .* (-2.76742510726862411e-03, 4.99852801234917238e-03, 2.30417297573763929e-05)
	p3mass = 2.85885980666130812e-04 * solar_mass
	p4pos = (1.28943695621391310e+01,-1.51111514016986312e+01,-2.23307578892655734e-01)
	p4vel = days_per_year .* (2.96460137564761618e-03,2.37847173959480950e-03,-2.96589568540237556e-05)
	p4mass = 4.36624404335156298e-05 * solar_mass

	p5pos = (1.53796971148509165e+01,-2.59193146099879641e+01,1.79258772950371181e-01)
	p5vel = days_per_year .* (2.68067772490389322e-03,1.62824170038242295e-03,-9.51592254519715870e-05)
	p5mass = 5.15138902046611451e-05 * solar_mass


offset_momentum (p1,p2,p3,p4,p5) = (pp1,p2,p3,p4,p5) where
		pp1 = (pPos p1,ppvel,pMass p1) 
		ppvel =  (-1.0 / solar_mass) .* momentum
		momentum = (mul p2) .+ (mul p3) .+ (mul p4) .+ (mul p5)
		mul (_,vel,mass) = (mass .* vel)

--trick here is to unroll each loop.. based on the c++ version
advance ps = update $! adv4 $ adv3 $ adv2 $ adv1 ps

update (p1,p2,p3,p4,p5) = (up p1,up p2,up p3,up p4,up p5) where
	up (pos,vel,mass) = ((pos .+ (delta_time .* vel)),vel,mass)


adv4 (p1,p2,p3,p4,p5) = (p1,p2,p3,pp4,pp5) where
	il45 = innerLoop p4 p5
	pp4 = fst il45
	pp5 = snd il45

adv3 (p1,p2,p3,p4,p5) = (p1,p2,pp3,pp4,pp5) where
	il34 = innerLoop p3 p4
	il35 = innerLoop (fst il34) p5
	pp3 = fst il35
	pp4 = snd il34
	pp5 = snd il35

adv2 (p1,p2,p3,p4,p5) = (p1,pp2,pp3,pp4,pp5) where
	il23 = innerLoop p2 p3
	il24 = innerLoop (fst il23) p4
	il25 = innerLoop (fst il24) p5
	pp2 = fst il25
	pp3 = snd il23
	pp4 = snd il24
	pp5 = snd il25

adv1 (p1,p2,p3,p4,p5) = (pp1,pp2,pp3,pp4,pp5) where
	il12 = innerLoop p1 p2
	il13 = innerLoop (fst il12) p3
	il14 = innerLoop (fst il13) p4
	il15 = innerLoop (fst il14) p5
	pp1 = fst il15
	pp2 = snd il12
	pp3 = snd il13
	pp4 = snd il14
	pp5 = snd il15

innerLoop p1 p2 = (pp1,pp2) where
	difference = (pPos p1) .- (pPos p2)
	distance_squared = mag2 difference
	distance = sqrt distance_squared
	magnitude = delta_time / (distance * distance_squared)
	planet2_mass_magnitude = (pMass p2) * magnitude
	planet1_mass_magnitude = (pMass p1) * magnitude
	pp1 = (pPos p1, (pVel p1) .- (planet2_mass_magnitude .* difference), pMass p1)
	pp2 = (pPos p2, (pVel p2) .+ (planet1_mass_magnitude .* difference), pMass p2)


energy (p1,p2,p3,p4,p5) = sum2 where
	sum2 = loop5 + loop4 + loop3 + loop2 + loop1
	loop0 (pos,vel,mass) = (0.5) * mass * (mag2 vel)
	loop5 = (loop0 p5)
	loop4 = (loop0 p4) - (delE p4 p5)
	loop3 = (loop0 p3) - (delE p3 p5) - (delE p3 p4)
	loop2 = (loop0 p2) - (delE p2 p5) - (delE p2 p4) - (delE p2 p3)
	loop1 = (loop0 p1) - (delE p1 p5) - (delE p1 p4) - (delE p1 p3) - (delE p1 p2)
	delE (pos,_,mass) (pos2,_,mass2) = (mass * mass2) / (mag (pos .- pos2))

runIt 0 args = args
runIt cnt args = runIt (cnt-1) $! advance args
--runIt cnt args = advance $ runIt (cnt-1) args

main :: IO()
main = do
	n <- getArgs >>= readIO.head
	let ps = offset_momentum planets
	let results = runIt n ps
	printf "%.9f\n" (energy ps)
	printf "%.9f\n" (energy results)

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

Re: [Haskell-cafe] nbody (my own attempt) and performance problems

2007-11-27 Thread Ryan Dickie
I sat down tonight and did tons of good learning (which was my goal). Yes,
the variable names in the unrolling are a little ugly, but it helps to read
the C++ version for context. There are two for loops (advN is each inner one,
unrolled); the other function names match the C++ version.  It was my goal
to implement an unrolled version of that.

Fortunately, my performance is excellent now. It is only 4x slower than the
C++ version and 2x slower than the Haskell one listed (which uses pointer
trickery). I am sure more could be done, but I am at my limit of
comprehension. If I may guess, I would say that any speed issues now are
related to a lack of in-place updating for variables and structures.

I'd also like to thank everyone for their help so far. I have attached my
latest version.

--ryan

On Nov 27, 2007 7:14 PM, Sterling Clover [EMAIL PROTECTED] wrote:

 The first step would be profiling -- i.e. compiling with -prof -auto-all to
 tag each function as a cost center, then running with +RTS -p
 to generate a cost profile. The problem here is you've got massive
 amounts of unrolling done already, so it's sort of hard to figure out
 what's doing  what, and the names you've given the unrolled functions
 are... less than helpful. (first rule of optimization: optimize
 later.)  The use of tuples shouldn't be a problem per se in terms of
 performance, but it really hurts readability to lack clear type
 signatures and types. You'd probably be better off constructing a
 vector data type as does the current Haskell entry -- and by forcing
 it to be strict and unboxed (you have nearly no strictness
 annotations I note -- and recall that $! only evaluates its argument
 to weak head normal form, which means that you're just checking if
 the top-level constructor is _|_) you'll probably get better
 performance to boot. In any case, declaring type aliases for the
 various units you're using would also help readability quite a bit.

 --S

 On Nov 27, 2007, at 5:41 PM, Ryan Dickie wrote:

  I thought it would be a nice exercise (and a good learning
  experience) to try and solve the nbody problem from the debian
  language shootout. Unfortunately, my code sucks. There is a massive
  space leak and performance is even worse. On the bright side, my
  implementation is purely functional. My twist: I manually unrolled
  a few loops from the C++ version.
 
  I believe that most of my performance problems stem from my abuse
  of tuple. The bodies are passed as a tuple of planets, a planet is
  a tuple of (position, velocity, mass) and the vectors position and
  velocity are also tuples of type double. My lame justification for
  that is to make it nice and handy to pass data around.
 
  Any tips would be greatly appreciated.
 
  --ryan
  nbody3.hs


{-# OPTIONS_GHC -O2 -fbang-patterns -funbox-strict-fields #-}
{--
	The Great Computer Language Shootout
	http://shootout.alioth.debian.org/

	N-body problem

	C version contributed by Christoph Bauer
	converted to C++ and modified by Paul Kitchin
	Haskell version by Ryan Dickie based on the above two
	With great help from ddarius, Spencer Janssen, and #haskell
--}
import System
import Text.Printf

data Vector3 = V !Double !Double !Double

V a b c .+ V x y z = V (a+x) (b+y) (c+z)
V a b c .- V x y z = V (a-x) (b-y) (c-z)
x .* V a b c = V (x*a) (x*b) (x*c)
mag2 !(V x y z) = x*x + y*y + z*z

--some getter functions
pVel !(_,vel,_) = vel
pPos !(pos,_,_) = pos
pMass !(!_,!_,!mass) = mass

days_per_year = 365.24::Double
solar_mass = (4::Double) * pi * pi
delta_time = 0.01::Double

planets = ( p1,p2,p3,p4,p5 ) where
	p1 = ( V 0 0 0, V 0 0 0, solar_mass )
	p2 = ( p2pos,p2vel,p2mass )
	p3 = ( p3pos,p3vel,p3mass )
	p4 = ( p4pos,p4vel,p4mass )
	p5 = ( p5pos,p5vel,p5mass )
	p2pos =  V 4.84143144246472090e+00 (-1.16032004402742839e+00) (-1.03622044471123109e-01)
	p2vel = days_per_year .* V 1.66007664274403694e-03 7.69901118419740425e-03 (-6.90460016972063023e-05)
	p2mass = 9.54791938424326609e-04 * solar_mass
	p3pos = V 8.34336671824457987e+00 4.12479856412430479e+00 (-4.03523417114321381e-01)
	p3vel = days_per_year .* V (-2.76742510726862411e-03) 4.99852801234917238e-03 2.30417297573763929e-05
	p3mass = 2.85885980666130812e-04 * solar_mass
	p4pos = V 1.28943695621391310e+01 (-1.51111514016986312e+01) (-2.23307578892655734e-01)
	p4vel = days_per_year .* V 2.96460137564761618e-03 2.37847173959480950e-03 (-2.96589568540237556e-05)
	p4mass = 4.36624404335156298e-05 * solar_mass
	p5pos = V 1.53796971148509165e+01 (-2.59193146099879641e+01) 1.79258772950371181e-01
	p5vel = days_per_year .* V 2.68067772490389322e-03 1.62824170038242295e-03 (-9.51592254519715870e-05)
	p5mass = 5.15138902046611451e-05 * solar_mass

update (!p1,!p2,!p3,!p4,!p5) = ( up p1,up p2,up p3,up p4,up p5 ) where
	up (!pos,!vel,!mass) = ( (pos .+ (delta_time .* vel)),vel,mass )

Re: [Haskell-cafe] Data.Set.member vs Data.List.elem

2007-11-12 Thread Ryan Dickie
Perhaps this has something to do with uniqueness: a list can have many
duplicate elements, while a set is supposed to contain unique elements.
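For what it's worth, the two spellings sit side by side like this (a small
sketch, not from the original mail):

import qualified Data.Set as Set

inList :: Int -> Bool
inList x = x `elem` [1, 2, 3]                      -- Prelude / Data.List

inSet :: Int -> Bool
inSet x = x `Set.member` Set.fromList [1, 2, 3]    -- Data.Set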

On Nov 12, 2007 2:48 PM, Neil Mitchell [EMAIL PROTECTED] wrote:

 Hi,

 Is there a good reason that Data.Set uses the name member while
 Data.List (or the Prelude) uses the name elem, for what to me seem
 identical concepts. I realise that in Set's the traditional test is
 for membership, but it seems awfully arbitrary that one jumped one
 way and one jumped the other. I've just written an entire module's
 worth of Haskell with Set.elem, as that felt right; now I'm going
 back and fixing it.

 Thanks

 Neil

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-11-02 Thread Ryan Dickie
On 11/2/07, Sterling Clover [EMAIL PROTECTED] wrote:

 As I understand it, the question is what you want to measure for.
 gzip is actually pretty good at, precisely because it removes
 boilerplate, reducing programs to something approximating their
 complexity. So a higher gzipped size means, at some level, a more
 complicated algorithm (in the case, maybe, of lower level languages,
 because there's complexity that's not lifted to the compiler). LOC
 per language, as I understand it, has been somewhat called into
 question as a measure of productivity, but there's still a
 correlation between programmers and LOC across languages even if it
 wasn't as strong as thought -- on the other hand, bugs per LOC seems
 to have been fairly strongly debunked as something constant across
 languages. If you want a measure of the language as a language, I
 guess LOC/gzipped is a good ratio for how much noise it introduces
 -- but if you want to measure just pure speed across similar
 algorithmic implementations, which, as I understand it, is what the
 shootout is all about, then gzipped actually tends to make some sense.

 --S


Lossless file compression, a.k.a. entropy coding, attempts to maximize the
amount of information per bit (or byte), getting as close to the entropy as
possible. Basically, gzip is measuring (approximating) the amount of
information contained in the code.

I think it would be interesting to compare the ratio between a raw file's
size and its entropy (we can come up with a precise metric later). This would
show us how concise the language and code actually are.
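A rough way to compute that ratio, assuming a simple byte-frequency entropy
estimate is good enough (the file name and the exact metric are made up for
illustration):

import qualified Data.ByteString as B
import qualified Data.Map as M

-- Shannon entropy of the byte distribution, in bits per byte.
entropy :: B.ByteString -> Double
entropy bs = negate (sum [ p * logBase 2 p | p <- probs ])
  where
    n      = fromIntegral (B.length bs)
    counts = M.elems (B.foldl' (\m w -> M.insertWith (+) w 1 m) M.empty bs)
    probs  = [ fromIntegral c / n | c <- counts ]

main :: IO ()
main = do
  bs <- B.readFile "Program.hs"   -- any non-empty source file
  let rawBits     = 8 * fromIntegral (B.length bs)
      entropyBits = entropy bs * fromIntegral (B.length bs)
  print (entropyBits / rawBits)   -- closer to 1 means less redundancy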

--ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Re: Why can't Haskell be faster?

2007-10-31 Thread Ryan Dickie
So in a few years' time, when GHC has matured, we can expect performance to
be on par with current Clean? So Clean is a good approximation of peak
performance?

--ryan

On 10/31/07, Don Stewart [EMAIL PROTECTED] wrote:

 ndmitchell:
  Hi
 
  I've been working on optimising Haskell for a little while
  (http://www-users.cs.york.ac.uk/~ndm/supero/), so here are my thoughts
  on this.  The Clean and Haskell languages both reduce to pretty much
  the same Core language, with pretty much the same type system, once
  you get down to it - so I don't think the difference between the
  performance is a language thing, but it is a compiler thing. The
  uniqueness type stuff may give Clean a slight benefit, but I'm not
  sure how much they use that in their analyses.
 
  Both Clean and GHC do strictness analysis - I don't know which one
  does better, but both do quite well. I think Clean has some
  generalised fusion framework, while GHC relies on rules and short-cut
  deforestation. GHC goes through C-- to C or ASM, while Clean has been
  generating native code for a lot longer. GHC is based on the STG
  machine, while Clean is based on the ABC machine - not sure which is
  better, but there are differences there.
 
  My guess is that the native code generator in Clean beats GHC, which
  wouldn't be too surprising as GHC is currently rewriting its CPS and
  Register Allocator to produce better native code.

 Yes, this was my analysis too -- its in the native code gen. Which is
 perhaps the main GHC bottleneck now.

 -- Don

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] newbie optimization question

2007-10-28 Thread Ryan Dickie
One thing I've noticed is that turning on optimizations significantly
increases the speed of Haskell code. Are you comparing code between
languages with -O2 or without opts?
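For comparison, a sketch of the explicitly strict, foldl'-based variant that
the quoted message below mentions trying (compile with ghc -O2; this is not
the original poster's code):

import Data.List (foldl')

-- Sum of proper divisors, accumulated strictly by foldl'.
sumDivisors :: Int -> Int
sumDivisors i = foldl' (+) 0 [ j | j <- [1 .. i - 1], i `mod` j == 0 ]

main :: IO ()
main = print [ i | i <- [1 .. 10000], i == sumDivisors i ]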

On 10/28/07, Prabhakar Ragde [EMAIL PROTECTED] wrote:

 For the purposes of learning, I am trying to optimize some variation of
 the following code for computing all perfect numbers less than 10000.

 divisors i = [j | j <- [1..i-1], i `mod` j == 0]
 main = print [i | i <- [1..10000], i == sum (divisors i)]

 I know this is mathematically stupid, but the point is to do a moderate
 nested-loops computation. On my 2.33GHz dual-core MacBookPro, the
 obvious C program takes about .3 seconds, and a compiled OCaML program
 (tail recursion, no lists) about .33 seconds. The above takes about 4
 seconds.

 I've tried using foldl', and doing explicit tail recursion with strict
 accumulators, but I can't get the running time below 3 seconds. Is it
 possible to come within striking distance of the other languages?
 Thanks. --PR

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] haskell on llvm?

2007-09-14 Thread Ryan Dickie
I could see it as a useful abstraction instead of directly generating
assembly. To me the idea behind LLVM seems nice and clean, and academic to a
certain degree. I see it as something to look out for in the future.

On 9/13/07, brad clawsie [EMAIL PROTECTED] wrote:

 has anyone ever considered using llvm as an infrastructure for haskell
 compilation? it would seem people are looking at building frontends for
 scheme, ocaml, etc. i don't know if an alternate backend is
 appropriate, but it would seem to be an interesting way to aggregate
 the best thinking for various optimizations over a more diverse group
 of developers.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] howto install ghc-6.7.* ?

2007-08-11 Thread Ryan Dickie
Same problem here. I downloaded the ghc-6.7.20070811.tar.bz2 snapshot build
on amd64 under Ubuntu.

From the README
 The sh boot step is only necessary if this is a tree checked out
 from darcs.  For source distributions downloaded from GHC's web site,
 this step has already been performed.



On 8/11/07, Marc A. Ziegert [EMAIL PROTECTED] wrote:

 i just don't get it.
 please, can anybody explain to me how to do that?
 i tried it the last few days with ghc-6.7.20070807, ghc-6.7.20070809, and
 ghc-6.7.20070810.
 it always results in a broken library (without Prelude):

 # ghc-pkg list
 /usr/local/lib/ghc-6.7.20070810/package.conf:
 {ghc-6.7.20070810}, rts-1.0

 i did this on my gentoo-i386-box (pretty old, takes 1h for quick build,
 3.5h without mk/build.mk):

 T=20070810
 tar xjf ghc-6.7.$T-src.tar.bz2
 tar xjf ghc-6.7.$T-src-extralibs.tar.bz2
 cd ghc-6.7.$T
 (
 #echo BuildFlavour = quick
 #cat mk/build.mk.sample
 echo HADDOCK_DOCS = YES
 ) > mk/build.mk
 ./configure && ( time nice -n 19 make all install )


 those extralibs seem to be installed in
 /usr/local/lib/ghc-6.7.20070810/lib/
 but registered in
 ghc-6.7.20070810/driver/package.conf.inplace
 instead of
 /usr/local/lib/ghc-6.7.20070810/package.conf
 .


 - marc




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] OS design FP aesthetics

2007-06-18 Thread Ryan Dickie

On 6/18/07, Creighton Hogg [EMAIL PROTECTED] wrote:




On 6/18/07, Andrew Coppin [EMAIL PROTECTED] wrote:

 Creighton Hogg wrote:
  Well, since we're on the subject and it's only the Cafe list, what is
  it that you find messy about Linux that you would want to be solved by
  some hypothetical Haskell OS?

 This is drifting off-topic again, but here goes...


Yeah, so I'll just split this off into a different thread.

There are lots of things to like about Linux. It doesn't cost money.
 It's fast. It's reliable. It's flexible. It's secure.


Okay, I'm not sure if I'd agree with the reliable & secure points.  I
mean, relative to what could be done.  I'm a rank amateur when it comes to
OS work but when I've looked at recent papers Linux really isn't that
cutting edge.  I mean, it may be reliable in comparison to Windows 98 & has
less known exploits than any Windows system, but in terms of how good it
*could* be I think there's an awful lot of room for growth.

However,
 unfortunately it's still Unix. In other words, it's a vast incoherant
 mess of largely incompatible ad-hoc solutions to individual problems
 implemented independently by unrelated hackers over the 40+ years of
 history that this software has been around. New software has to emulate
 quirks in old software, and client programs work around the emulated
 quirks in the new software to get the functionallity it actually wants.
 One vast tangled mess of complexity and disorder. Exhibit A: Package
 managers exist. Exhibit B: Autoconf exists. I rest my case.


Okay, but these don't seem to really be design flaws so much as the
inevitable results of age and the need for backwards compatibility.  I'm
looking more for technical problems that you would want to see fixed in our
magical UberOS.

An operating system should have a simple, clear, consistent design. Not
 unlike a certain programming language named after a dead mathematician,
 come to think of it...

 (Have you ever programmed in C? You can certainly see where Unix gets
 its features from - terse, cryptic and messy.)


This is another thing we're just going to disagree on.  I think C++ is a
pretty messy language, but feel that straight up C is rather simple and
elegant.  I had only used C++ before, but a friend rather easily convinced
me that C is in fact a very sexy language when used in its intended design
space.

Still, I don't have the skill to write a functioning operating system -
 much less one that's ready for the desktop - so that's that I
 suppose...

 (I did seriously investigate the task once. Indeed, I got as far as
 writing a bootloader. It worked too!)


Would you mind sharing the code?  I'd be interested.






While this isn't an operating system written in a functional programming
language, it is quite an important part of one:
"NixOS is an experiment to see if we can build an operating system in which
software packages, configuration files, boot scripts and the like are all
managed in a purely functional way, that is, they are all built by
deterministic functions and they never change after they have been built.",
from http://nix.cs.uu.nl/nixos/index.html

One thing Microsoft has been doing which is interesting is Singularity. It
is a research OS done in .NET and is completely managed. It will be
interesting to see the effects of a managed runtime environment, and it may
well open the door for a functional language to target that runtime.

I think operating systems, and software design in general, will be headed
towards integrating functional techniques from languages like Haskell into C
and C++. Google's map/reduce paper is an excellent example, but so is Tim
Sweeney's talk on the future of video game design.

I suppose more importantly.. would a Haskell kernel be done as a microkernel
or a monolithic kernel ;-) Marketing it would be hard. Who would want to buy
a lazy OS?

--Ryan Dickie
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] An interesting toy

2007-05-05 Thread Ryan Dickie

Sounds like a neat program. I'm on a laptop right now, but I'll check it out
later.
The reason I am mailing is that you can use mencoder to convert a stream
of image files into a video file.

http://www.mplayerhq.hu/DOCS/HTML/en/menc-feat-enc-images.html

--ryan

On 5/5/07, Andrew Coppin [EMAIL PROTECTED] wrote:


 Greetings.

I have something which you might find mildly interesting. (Please don't
attempt the following unless you have some serious CPU power available, and
several hundred MB of hard drive space free.)

  darcs get http://www.orphi.me.uk/darcs/Chaos
  cd Chaos
  ghc -O2 --make System1
  ./System1

On my super-hyper-monster machine, the program takes an entire 15 minutes
to run to completion. When it's done, you should have 500 images sitting in
front of you. (They're in PPM format - hence the several hundred MB of disk
space!) The images are the frames that make up an animation; if you can find
a way to play this animation, you'll be treated to a truely psychedelic
light show! (If not then you'll just have to admire them one at a time. The
first few dozen frames are quite boring by the way...)

If you want to, you can change the image size. For example, ./System1
800 will render at 800x800 pixels instead of the default 200x200. (Be
prepaired for *big* slowdowns!)

*What is it?*

Well, it's a physical simulation of a chaos pendulum. That is, a
magnetic pendulum suspended over a set of magnets. The pendulum would just
swing back and forth, but the magnets perturb its path in complex and
unpredictable ways.

However, rather than simulate just 1 pendulum, the program simulates
40,000 of them, all at once! For each pixel, a pendulum is initialised with
a velocity of zero and an initial position corresponding to the pixel
coordinates. As the pendulums swing, each pixel is coloured according to the
proximity of the corresponding pendulum to the tree magnets.

*Help requested...*

Can anybody tell me how to make the program go faster?

I already replaced all the lists with IOUArrays, which resulted in big,
big speedups (and a large decrease in memory usage). But I don't know how to
make it go any faster. I find it worrying that the process of converting
pendulum positions to colours appears to take significantly longer than the
much more complex task of performing the numerical integration to discover
the new pendulum positions. Indeed, using GHC's profiling tools indicates
that the most time is spent executing the function quant8. This function
is defined as:

  quant8 :: Double -> Word8
  quant8 = floor . (0xFF *)

I can't begin to *imagine* how *this* can be the most compute-intensive
part of the program when I've got all sorts of heavy metal maths going on
with the numerical integration and so forth...! Anyway, if anybody can tell
me how to make it run faster, I'd be most appriciative!

Also, is there an easy way to make the program use *both* of the CPUs in
my PC? (Given that the program maps two functions over two big IOUArrays...)

Finally, if anybody has any random comments about the [lack of] quality in
my source code, feel free...




___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Debugging

2007-05-04 Thread Ryan Dickie

On 5/4/07, Monang Setyawan [EMAIL PROTECTED] wrote:


On 5/5/07, Stefan O'Rear [EMAIL PROTECTED] wrote:
 On Sat, May 05, 2007 at 11:36:16AM +0700, Monang Setyawan wrote:
  Hi, I'm a beginner Haskell user.
 
  Is there any way to trace/debug the function application in GHC?

 Absolutely!

 [EMAIL PROTECTED]:/tmp$ ghci X.hs
___ ___ _
   / _ \ /\  /\/ __(_)
  / /_\// /_/ / /  | |GHC Interactive, version 6.7.20070502, for
Haskell 98.
 / /_\\/ __  / /___| |http://www.haskell.org/ghc/
 \/\/ /_/\/|_|Type :? for help.

 Loading package base ... linking ... done.
 [1 of 1] Compiling Main ( X.hs, interpreted )
 Ok, modules loaded: Main.
 *Main :break fac

Great!! Thanks, it really helps.
I should update my GHC to the newest version (I use the old  6.4.2
with no break command)

Is there any editor/IDE supporting this break command? It would be even
cooler if we could debug functions just by placing a mark on the line.


 Stefan



--
Demi masa..


--
Demi masa..




I've only written trivial applications and functions in Haskell. But the
title of this thread got me thinking.

In an imperative language you have clear steps, states, variables to watch,
etc.
What techniques/strategies might one use for a functional language?
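One common technique, besides the GHCi :break support quoted above, is
Debug.Trace; a small sketch (fac is just an example function):

import Debug.Trace (trace)

-- trace prints its message when the surrounding expression is evaluated,
-- which also makes the order of lazy evaluation visible while debugging.
fac :: Integer -> Integer
fac 0 = 1
fac n = trace ("fac " ++ show n) (n * fac (n - 1))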

--ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is Excel a FP language?

2007-04-25 Thread Ryan Dickie

On 4/24/07, Tony Morris [EMAIL PROTECTED] wrote:


In a debate I proposed Excel is a functional language. It was refuted
and I'd like to know what some of you clever Haskellers might think :)

My opposition proposed (after some weeding out) that there is a
distinction between Excel, the application, the GUI and Excel, the
language (which we eventually agreed (I think) manifested itself as a
.xls file). Similarly, VB is both a language and a development
environment and referring to VB is a potential ambiguity. I disagree
with this analogy on the grounds that the very definition of Excel
(proposed by Microsoft) makes no distinction. Further, it is impossible
to draw a boundary around one and not the other.

I also pointed to the paper by Simon Peyton-Jones titled "Improving the
world's most popular functional language: user-defined functions in
Excel", which quite clearly refers to Excel as a [popular] functional
language.

The debate started when I referred to the fact that financial
institutions change their functional language from Excel to something
like OCaml or Haskell. Of course, there is no doubting that these
companies can replace their entire use of Excel with a functional
language, which I think is almost enough to fully support my position
(emphasis on almost).


--
Tony Morris
http://tmorris.net/





Okay.. Excel consists of some C/C++ code, some Visual Basic, and some sort
of cell evaluation engine. The C/C++ and VB are definitely not functional.

Is the cell evaluation engine functional? I think not. I do not believe what
you can type into those cells constitutes a programming language, or at
least not a Turing-complete one. As far as I know, only simple calculations
can be performed. For example, is there any way to evaluate f(2) = 0, f(x) =
5 without invoking VBA (and how does VBA affect the dynamic?)? As far as I
understand, you can compose functions by stringing cells together into
higher-level functions or values, but the contents of the cells themselves
are heavily restricted.
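In Haskell, by contrast, the f(2) = 0, f(x) = 5 example above is just a
pattern-matching definition (a sketch of the contrast being drawn, not
anything Excel provides):

f :: Integer -> Integer
f 2 = 0
f _ = 5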

I am obviously no Excel guru, but I believe that if you can prove it is a
programming language, then you can probably prove that it is a functional
one.

--ryan
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is there a best *nix or BSD distro for Haskell hacking?

2007-04-22 Thread Ryan Dickie

My vote goes to Ubuntu. I've been using it for a few years, and before that I
tried a wide variety of distros. Ubuntu has a lot of polish, takes 20
minutes to install, and is just a really nice distribution overall. Things
just work. Ubuntu is Debian-based, so if you choose against Ubuntu my second
vote goes to Debian.

Many of the Haskell packages, including darcs, ghc, and well over 100 other
packages (mostly libraries), are in the package manager ready to be
installed.

--ryan

On 4/22/07, David Cabana [EMAIL PROTECTED] wrote:


I have a spare Windows machine I want to put to better use.  I want
to turn it into a Haskell hacking box, and was wondering whether any
particular *nix or BSD distribution is best (or worst) suited for
this.  Any thoughts?

Thank you,
David Cabana

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Re: [Haskell-cafe] Is there a best *nix or BSD distro for Haskell hacking?

2007-04-22 Thread Ryan Dickie

I'm running Feisty.
[EMAIL PROTECTED]:~$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 6.6

--ryan

On 4/22/07, Dougal Stanton [EMAIL PROTECTED] wrote:


On 22/04/07, Ryan Dickie [EMAIL PROTECTED] wrote:

 Many of the haskell packages including darcs, ghc, and well over 100
other
 packages (mostly libraries) are in the package manager ready to be
 installed.

The problem with Ubuntu (at least until the Feisty release a few days
ago?) was that GHC wasn't up-to-date by default; it came with 6.4.
Moving to 6.6 isn't a difficult feat (the generic binaries from the
GHC site seem to work fine for Edgy, if you install libreadline too)
but being behind that curve is noticeable. If you want to stay on the
cutting edge Gentoo takes a lot of the hassle out of it, since you can
use the repository stored on haskell.org. The downside is having to
keep the rest of the system updated if you've got a slow machine.

You pays your money, etc.

D.

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


Fwd: [Haskell-cafe] Tutorial on Haskell

2007-04-16 Thread Ryan Dickie

Blast.. I didn't hit reply-all, so here's a forward of my mail to the
group...

--ryan
-- Forwarded message --
From: Ryan Dickie [EMAIL PROTECTED]
Date: Apr 16, 2007 4:24 PM
Subject: Re: [Haskell-cafe] Tutorial on Haskell
To: Simon Peyton-Jones [EMAIL PROTECTED]

I can tell you what my colleagues and I would be interested in (though none
of us are actually going). We code a lot of math; you may call it scientific
computing. Haskell seems like a natural fit for the task.

In particular we are interested in:
1) the type system
2) concurrency (can these be set to run on a large system?)
3) the simple relation between the equations we write on paper and the
equations we write in Haskell.
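On point 3, a small sketch of how directly a formula on paper transcribes
into Haskell (a central-difference derivative; the names are illustrative):

-- On paper: f'(x) ~ (f(x + h) - f(x - h)) / (2h)
derivative :: Double -> (Double -> Double) -> Double -> Double
derivative h f x = (f (x + h) - f (x - h)) / (2 * h)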

I'm still a n00b to Haskell. For us, languages like Matlab, Maple, etc. do
not fit the job very well and run too slowly. C/C++ is usually what I use,
but it can be a pain. Python, etc... well, it's good for the glue, I suppose.
Haskell might fit that niche.

On 4/16/07, Simon Peyton-Jones [EMAIL PROTECTED] wrote:


Friends

I have agreed to give a 3-hr tutorial on Haskell at the Open Source
Convention 2007
http://conferences.oreillynet.com/os2007/

I'm quite excited about this: it is a great opportunity to expose Haskell
to a bunch of smart folk, many of whom won't know much about Haskell.  My
guess is that they'll be Linux/Perl/Ruby types, and they'll be practitioners
rather than pointy-headed academics.

One possibility is to do a tutorial along the lines of "here's how to
reverse a list", "here's what a type is", etc; you know the kind of
thing.  But instead, I'd prefer to show them programs that they might
consider *useful* rather than cute, and introduce the language along the
way, as it were.

So this message is to ask you for your advice.  Many of you are exactly
the kind of folk that come to OSCON --- except that you know Haskell.   So
help me out:

Suggest concrete examples of programs that are
* small
* useful
* demonstrate Haskell's power
* preferably something that might be a bit
tricky in another language

For example, a possible unifying theme would be this:
http://haskell.org/haskellwiki/Simple_unix_tools

Another might be Don's cpu-scaling example

 http://cgi.cse.unsw.edu.au/~dons/blog/2007/03/10

But there must be lots of others.  For example, there are lots in the blog
entries that Don collects for the Haskell Weekly Newsletter.  But I'd like
to use you as a filter: tell me your favourites, the examples you find
compelling.  (It doesn't have to be *your* program... a URL to a great blog
entry is just fine.)  Of course I'll give credit to the author.

Remember, the goal is _not_ "explain monads".  It's "Haskell is a great
way to Get The Job Done".

Thanks!

Simon

___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe


[Haskell-cafe] Re: Why Perl is more learnable than Haskell

2007-04-11 Thread Ryan Dickie

I thought I could resist this thread but I'll bite =:-()

The first language I learned was BASIC. No real functions, simple step-by-step
instructions. I then learned HyperCard, C, C++, Python, assembly, VHDL, and
too many others!

Now I've decided to learn Haskell. I view it as a mathematician's language. I
do research in the field of medical imaging, particularly processing large
cardiac data sets to figure out characteristics, diseases, etc. My workflow
generally starts out as 1) mathematical idea, 2) turn that pure equation into
a numerical recipe, 3) implement and debug, 4) analyze, 5) go to step 1. I
find doing my thinking in the continuous domain makes things a lot easier.

But here's where I differ from everyone else. I already have the
mathematical relationships all nice and tidy (hopefully!) in my head before
I start. I literally just implement them. I don't want to have to care about
threading, IO, message passing, or numerical stability. I have to care about
performance, but only insofar as it hampers my productivity. Preferably the
language will handle all of that implicitly.

Your average programmer wants a language to do tasks. Having to think about
the math and relationships behind it all is rather sickening to them (and me
too!).

I am a new Haskell programmer (basically a week into it!). It is by far the
hardest language I've had to pick up. A lot of my code could be structured
in a functional way, but almost all of it relies on looping techniques (it
would take a lot of work to rethink my gradient descent method and make it
fast!). Regardless of actually using Haskell, I like to transfer these
techniques to C++. Before I even knew of Haskell I knew some FP methods, and
I found that using them shrank my code, shrank the bugs, and did nice things
for performance and concurrency. In fact, I just read a Google paper on
their batch system. They use two functions: map and reduce. They can easily
split the work up over their cluster, etc. These are the ideas of FP that I
like! That and the set-builder notation.
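Those two ideas look like this in Haskell, as a sketch (the names are made
up):

-- Map/reduce in miniature: map a function over the data, then reduce with (+).
sumSquares :: [Double] -> Double
sumSquares = foldr (+) 0 . map (^ 2)

-- Set-builder notation carries over almost verbatim as a list comprehension.
pythagorean :: [(Int, Int, Int)]
pythagorean = [ (a, b, c) | c <- [1 .. 20], b <- [1 .. c], a <- [1 .. b],
                            a*a + b*b == c*c ]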

I also hate Matlab to death. Is there any possibility of using Haskell as a
replacement, via GHCi? Mostly I care about linear algebra when it comes to
using Matlab.

PS: sorry if Gmail butchered this reply. I had subscribed to the digest, and
it turns out that was a mistake :D
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe