Re: [Haskell-cafe] I just don't get it (data structures and OO)

2007-06-03 Thread Vincent Kraeutler
Phlex wrote:
 This is very informative, and easier to grasp for such a newbie as me.
 So the idea is to take the changing function down the chain, i knew
 this couldn't be that hard!
 Still, this indeed requires thinking differently; I guess i'm up for
 quite a few exercises in order to wrap my mind around this.

 That's the kind of information that's missing from all these tutorials
 i found around the web.

 Thank you,
 Sacha



for what it's worth, i find it easiest to think about it this way:

assume you want to update a planet, i.e. you want to apply
changeP :: Planet -> Planet
to one planet in your system.

the important step is then to invoke wishful thinking (cf. SICP), and
assume you can partition your data structure into two parts, the one
that stays constant, and the one that is changed, i.e.

partition :: Universe -> (Rest, Planet)

with a corresponding inverse operation

recombine :: (Rest, Planet) -> Universe

So after the partitioning, updating the planet of interest is very easy
to accomplish:

changeP' (rest, planet) = (rest, changeP planet)

rolling the partitioning, updating and recombination into one, we get

update u = recombine (changeP' (partition u))

The second step is then to find out how to do the partitioning and
recombination easily and efficiently. For one very generic way to do
this, i would recommend that you read up on the Zipper data structure [1-3].
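
just to make the idea concrete before you dive into zippers, here is a
naive list-based sketch (my own toy types, an extra index argument to say
which planet, and no claim to efficiency -- this is not a zipper):

data Planet = Planet { name :: String, mass :: Double } deriving Show
type Universe = [Planet]

-- "Rest" is everything except the planet of interest:
-- the planets before it and the planets after it
type Rest = ([Planet], [Planet])

-- split the universe around the i-th planet (assumes 0 <= i < length u)
partition :: Int -> Universe -> (Rest, Planet)
partition i u = let (before, p : after) = splitAt i u
                in  ((before, after), p)

-- put the (possibly changed) planet back where it came from
recombine :: (Rest, Planet) -> Universe
recombine ((before, after), p) = before ++ p : after

-- change the i-th planet, leaving everything else untouched
update :: Int -> (Planet -> Planet) -> Universe -> Universe
update i changeP u = recombine (rest, changeP p)
  where (rest, p) = partition i u

a zipper does essentially the same thing, but keeps the (rest, focus) pair
around between updates, so you don't re-do the splitting every time you
move to a neighbouring planet.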

kind regards,
v.

[1] http://en.wikibooks.org/wiki/Haskell/Zippers
[2]
http://www.st.cs.uni-sb.de/edu/seminare/2005/advanced-fp/docs/huet-zipper.pdf
[3] http://cgi.cse.unsw.edu.au/~dons/blog/2007/05/17#xmonad_part1b_zipper



Re: [Haskell-cafe] The C Equiv of != in Haskell miscommunication thread

2007-05-30 Thread Vincent Kraeutler
i would just like to say thank you for all the extensive replies. after
fiddling with them for an afternoon i'm positive i grokked the concept.

it's just too bad the nice wrapper concept from [1] does not seem to be
directly applicable to fix in haskell, since it relies on untyped
side-effects.

anyhow, this has been very instructive.
thanks again!
v.

[1] http://citeseer.ist.psu.edu/mcadams01practical.html

Jason Dagit wrote:
 On 5/28/07, Donald Bruce Stewart [EMAIL PROTECTED] wrote:
 This thread should end, guys. It is inappropriate for the Haskell lists,
 and appears to have been a simple misunderstanding anyway.

 Thanks everyone. Please stay friendly!

 -- Don

 P.S. Have some cute code:

 Control.Monad.Fix.fix ((1:) . scanl (+) 1)

 Speaking of cute code, I'm fond of this:

 map length . List.group . Control.Monad.Fix.fix $ show

 And other (longer) variations which generate only powers of two.  It's
 a great conversation starter for teaching about fix.

 Jason








Re: [Haskell-cafe] Cute code [was: The C Equiv of != in Haskell miscommunication thread]

2007-05-29 Thread Vincent Kraeutler
Donald Bruce Stewart wrote:
 P.S. Have some cute code:

 Control.Monad.Fix.fix ((1:) . scanl (+) 1)
   

this is cute indeed! (do you keep an emergency reserve of those around
for situations like this? ;-))

ever the interested amateur, i admittedly remain stumped by fix (there's
evidence i'm not the only one [1]), though a little digging turned up
two very nice links, which might be interesting for those who share my
situation (hence this post). namely, an old LtU thread [2], in which you
will find a short oleg-post [3] (i'd give it about a hundred
milli-olegs), and a paper on practical applications of Y [4]. it seems
that the examples given in the latter two (scheme and ML) are
essentially trivial to translate to haskell, so with the help of ghci, i
suppose i will finally get a grip on Y. ;-)

either way, if one of the Masters Of The Shadow Y Style on this list
feels like throwing in another koan or two, you'll have at least one
thankful audience member ;-)

kind regards,
v.

[1] http://www.haskell.org/pipermail/haskell-cafe/2007-March/023662.html
[2] http://lambda-the-ultimate.org/classic/message5463.html
[3] http://okmij.org/ftp/Computation/overriding-selfapplication.html
[4] http://citeseer.ist.psu.edu/mcadams01practical.html





Re: [Haskell-cafe] Cute code [was: The C Equiv of != in Haskell miscommunication thread]

2007-05-29 Thread Vincent Kraeutler
Tomasz Zielonka wrote:
 On Tue, May 29, 2007 at 02:19:31PM +0200, Tomasz Zielonka wrote:
   
 On Tue, May 29, 2007 at 12:15:23PM +0200, Vincent Kraeutler wrote:
 
 ever the interested amateur, i admittedly remain stumped by fix (there's
 evidence i'm not the only one [1])
   
 The above code is equivalent to

 let l = 1 : scanl (+) 1 l in l

 which is a bit easier to decipher.

 The rest is maths and the subtleties of lazy evaluation.
 
 ... and these are the things you need to focus on to understand this
 code. In this case the use of fix is almost a small syntactic issue -
 you can eliminate it by inlining its definition.

 Best regards
 Tomek


   
i see that the definition of fix (from Control.Monad.Fix) could not be
any simpler:

 fix f = let x = f x in x

same goes for the type:

Prelude> :t Control.Monad.Fix.fix
Control.Monad.Fix.fix :: (a -> a) -> a

it's just that i find it difficult to get concrete intellectual mileage
out of it.
i can reproduce results for specific examples (and even manipulate them
a bit), but feel like i'm missing something deep yet simple. say, i
would not know where and how to apply it. so obviously true
understanding is still missing. reminds me of my first encounters with
$H \psi = E \psi$. ;-)
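
for the record, here is the one toy example i did manage to work through
so far (my own scribble, so take it with a grain of salt): fix turns an
"open" recursive step -- one that receives "itself" as an extra argument
-- into an ordinary recursive function by tying the knot.

import Control.Monad.Fix (fix)

-- the recursive call is abstracted out as the argument "rec";
-- fix feeds the function its own fixed point, closing the loop
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n <= 1 then 1 else n * rec (n - 1))

-- and don's one-liner, unfolded by hand as tomek showed:
--   fix ((1:) . scanl (+) 1)  ==  let l = 1 : scanl (+) 1 l in l
fibs :: [Integer]
fibs = fix ((1:) . scanl (+) 1)   -- 1, 1, 2, 3, 5, 8, ...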

most likely, i should just more carefully read the references i cited
myself ;-)

anyhow. if someone has a pedestrian's guide to the fixed point
operator lying around, a link would be much appreciated.

kind regards,
v.





Re: [Haskell-cafe] Cute code [was: The C Equiv of != in Haskell miscommunication thread]

2007-05-29 Thread Vincent Kraeutler
Vincent Kraeutler wrote:
 Tomasz Zielonka wrote:
   
 [snip]

 anyhow. if someone has a pedestrian's guide to the fixed point
 operator lying around, a link would be much appreciated.

   

i see that dons has very recently provided an answer for this on reddit:

http://programming.reddit.com/info/1uabt/comments

eternally indebted,
v.





[Haskell-cafe] haskell wiki indexing

2007-05-22 Thread Vincent Kraeutler
as was pointed out on the programming reddit [1], crawling of the
haskell wiki is forbidden, since http://www.haskell.org/robots.txt contains

User-agent: *
Disallow: /haskellwiki/

and indeed, a google search gives the old wiki
http://www.google.ch/search?q=haskell+wiki
i.e.
http://haskell.org/hawiki/FrontPage
rather than
http://haskell.org/haskellwiki/Haskell

in other words, it seems like the most relevant search engines do not
index the haskell wiki. i reported this on #haskell about a week
ago [2], and the setting was acknowledged to be a measure deliberately
taken to curb excessive load from spiders crawling the wiki. i still
believe this is a highly harmful stance, and just to make sure that whoever
may feel concerned is at least aware of the problem, i am now
cross-posting to this list as well.

kind regards,
v.

[1] http://programming.reddit.com/info/1qzjx/comments/c1r3ok
[2] http://tuukka.iki.fi/tmp/haskell-2007-05-16.html











Re: [Haskell-cafe] Re: (Chaos) [An interesting toy]

2007-05-07 Thread Vincent Kraeutler
Andrew Coppin wrote:

[snip]
 Fact #1: I don't *know* of any other numerical integration algorithm.
 (I've heard of RK4, but it's too complicated for me to understand.)

higher-order runge-kutta algorithms typically have high integration
accuracy, but may require multiple energy/force evaluations per
integration step. also, they are generally not symplectic [1, 2], which
is believed to be rather undesirable if you're looking for long-term
stability in your simulation. stoermer/verlet/leap-frog is symplectic,
and requires only one force evaluation per time-step. it is therefore
_the_ default choice for numerical integration of the equations of
motion. it is really quite simple [3, 4].
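
for illustration, here is a minimal 1d leapfrog (velocity-verlet) step in
haskell -- my own toy sketch, not taken from your program; 'accel' is the
acceleration as a function of position, 'dt' the time step:

-- one velocity-Verlet step: kick (half step), drift (full step), kick (half step)
leapfrogStep :: (Double -> Double)   -- acceleration at a given position
             -> Double               -- time step dt
             -> (Double, Double)     -- (position, velocity) at time t
             -> (Double, Double)     -- (position, velocity) at time t + dt
leapfrogStep accel dt (x, v) = (x', v')
  where
    vHalf = v + 0.5 * dt * accel x
    x'    = x + dt * vHalf
    v'    = vHalf + 0.5 * dt * accel x'

-- e.g. a harmonic oscillator (accel x = -x), started at x = 1, v = 0:
oscillator :: Double -> [(Double, Double)]
oscillator dt = iterate (leapfrogStep negate dt) (1.0, 0.0)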

if you're _really_ looking for speed, you'll want to look at r-RESPA,
which is a generalization of leapfrog that allows for multiple-timestep
integration [5]. if the force evaluation is indeed the limiting step in
your computation (which i'm not sure is the case), you'll certainly want
to look into that. it basically boils down to splitting your interaction
function into a (rugged) short-range part (zero beyond some cutoff
distance), which is evaluated every timestep, and a (smooth) long-range
part, which is evaluated only every so many steps.


 Fact #2: I have tried running the simulation with several different,
 non-comensurate time step values, and it always seems to produce the
 same output, so I'm reasonably confident there are no integration errors.
getting a good feeling for the maximum allowed timestep size is usually
the first step to getting performance out of your program.

as a rule of thumb, you'll want more than 10 steps per period of your
fastest degree of freedom. you can get a feeling for that by finding the
maximum of the second derivative of the potential energy function (which
will be somewhere close to one of your magnets, i suppose -- speaking of
which, does the energy diverge near the magnets? i think it does).
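
as a toy worked example (my own numbers, nothing to do with your actual
potential): for a harmonic potential V(x) = k*x^2/2 the second derivative
is k, the angular frequency is omega = sqrt(k/m) and the period is
T = 2*pi/omega; with k = m = 1 that gives T of about 6.3, so the rule of
thumb above would suggest a timestep somewhere below 0.6.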

to validate the above, you'll want to look at energy conservation --
compute the total (potential plus kinetic) energy of your system at the
beginning and at the end of a simulation of fixed length (fixed in terms
of time, not in terms of timesteps), and compare the two. you'll find
that there's a critical step size below which the energy stays constant,
and above which it starts to diverge rather quickly. you might want to
take that step size (and halve it if you're cautious).
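
sticking with the toy oscillator from the leapfrog sketch above (again my
own example, not your program), such a check could look like this:

-- total energy of the unit-mass harmonic oscillator: potential plus kinetic
totalEnergy :: (Double, Double) -> Double
totalEnergy (x, v) = 0.5 * x * x + 0.5 * v * v

-- relative energy drift after simulating a time span tTotal with step dt
energyDrift :: Double -> Double -> Double
energyDrift tTotal dt = abs (totalEnergy end - e0) / e0
  where
    nSteps = round (tTotal / dt) :: Int
    start  = (1.0, 0.0)
    e0     = totalEnergy start
    end    = iterate (leapfrogStep negate dt) start !! nSteps

tabulating energyDrift 100 dt for a handful of dt values should make the
critical step size stick out quite clearly.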

kind regards,
v.


[1] http://srv.chim.unifi.it/orac/MAN/node5.html

[2] http://mitpress.mit.edu/SICM/book-Z-H-57.html#%_chap_5

[3] http://einstein.drexel.edu/courses/CompPhys/Integrators/leapfrog/

[4] http://shootout.alioth.debian.org/gp4/benchmark.php?test=nbodylang=all

[5] http://cat.inist.fr/?aModele=afficheNcpsidt=15255925






Re: [Haskell-cafe] k-minima in Haskell

2007-04-12 Thread Vincent Kraeutler
Kurt Hutchinson wrote:
 On 4/12/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 To get the indices, use the Schwartzian transform:

 sortWith :: (Ord b) => (a -> b) -> [a] -> [a]
 sortWith f = mergeRuns . runs
   where
     runs = map (:[])

     mergeRuns []   = []
     mergeRuns [xs] = xs
     mergeRuns xss  = mergeRuns (mergeRun xss)

     mergeRun (xs1:xs2:xss) = mergeOne xs1 xs2 : mergeRun xss
     mergeRun xss           = xss

     mergeOne [] ys = ys
     mergeOne xs [] = xs
     mergeOne xs'@(x:xs) ys'@(y:ys)
         = case compare (f x) (f y) of
             LT -> x : mergeOne xs ys'
             GT -> y : mergeOne xs' ys
             EQ -> x : y : mergeOne xs ys

 getKMinima :: (Ord a) => Int -> [a] -> [Int]
 getKMinima k = map fst . take k . sortWith snd . zip [0..]

 For my own edification, what is the benefit of this sortWith over sortBy?

 f `on` g = \x y -> f (g x) (g y)
 kminima k = map fst . take k . sortBy ( compare `on` snd ) . zip [0..]


possibly related (newbie question):

pairs are instances of Ord, why not directly sort those (implying the
item to be sorted is fst):

 kminima k = \list -> map snd $ take k $ sort $ zip list [0..]
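
a quick sanity check in ghci (my own toy run, with Data.List.sort in scope):

import Data.List (sort)

kminima :: Ord a => Int -> [a] -> [Int]
kminima k list = map snd $ take k $ sort $ zip list [0..]

-- *Main> kminima 3 [5,1,4,2,8]
-- [1,3,2]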







Re: [Haskell-cafe] Application of iterate

2007-03-23 Thread Vincent Kraeutler
Henning Thielemann wrote:
 On Fri, 23 Mar 2007, Neil Mitchell wrote:

   
 my_sqrt t = last (take 20 (iterate (\n -> n/2 + t/(2 * n)) t))
   

 http://darcs.haskell.org/htam/src/Numerics/ZeroFinder/Newton.hs

 with
   inverse t (\x -> (x^2, 2*x)) t

   

 I don't know how to add it into the one liner though - although I
 suspect someone here will :)
 


 http://darcs.haskell.org/htam/src/Numerics/Sequence.hs

 limitDifference :: (Real a) => a -> [a] -> a
 limitDifference tol xs =
    case dropWhile ((> tol) . abs . uncurry (-)) $ zip xs (tail xs) of
       ((_,x):_) -> x
       [] -> error "limitDifference: Finite sequence, but no element satisfies abort criterion."


   
as a fellow haskell newbie, i would like to take this opportunity to
encourage library authors to submit their material to the haskell hackage.

the haskell infrastructure is rather big, complex and organic. (see
e.g. http://darcs.haskell.org/tmp/ for an example) it was only by chance
that i ran across the (rather amazing) htam library a week ago or so. i
hope you agree that these things should not be left to chance ;-)

hoogle, google and hackage tend to be the first places where i look for
fresh meat. i suppose this also applies to other haskell freshmen.

cheers,
v.





Re: [Haskell-cafe] Matrices in Haskell

2007-03-20 Thread Vincent Kraeutler
Matthew Brecknell wrote:
 Ivan Miljenovic:
   
 As such, I'd like to know if there's any way of storing an n-by-n
 matrix such that the algorithm/function to get either the rows or the
 columns is less than O(n^2) like transposition is.  I did try using an
 Array, but my (admittedly hurried and naive) usage of them took longer
 than a list of lists and was more difficult to manipulate, though it
 didn't require separate inverse functions for rows and cols.  Also, since I
 need to look all throughout the list of values, it's still O(n^2), but
 this time for both rows and columns.
 

 I'm not sure I see the problem, since any operation that touches all the
 elements of a n-by-n matrix will be at least O(n^2). For such an
 operation, a transposition should just add a constant factor.

 When you tried using Arrays, I presume you used an array indexed by a
 pair (i,j), and just reversed the order of the index pair to switch from
 row-wise to column-wise access? It's hard to see how that would slow you
 down. Perhaps the slowdown was caused by excessive array copying?
 Immutable arrays can be efficient for algorithms that write a whole
 array at once, but usually not for algorithms that modify one element at
 a time.

 I think you'll need to show specific functions that are performing below
 expectation before the list will be able to help.

 For problems like latin squares and sudoku, also try thinking outside
 the matrix. Do you really need to structure the problem in terms of
 rows and columns? What about a set of mutually-intersecting sets of
 cells?



   
i think you might also want to check up on

http://en.wikibooks.org/wiki/Haskell/Hierarchical_libraries/Arrays

if you intend to do a significant number of incremental updates, it is my
(not particularly well-informed) understanding that you should use either
mutable arrays (Data.Array.ST together with runST), or Data.Array.Diff
with explicit sequencing.

both approaches are discussed in the wikibook page above.
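
as a (possibly naive) illustration of the first approach -- my own toy
example, not taken from the wikibook page -- build a small matrix with
in-place writes inside ST and freeze it into an immutable unboxed array
at the end:

import Data.Array.ST (runSTUArray, newArray, writeArray)
import Data.Array.Unboxed (UArray)

-- n-by-n identity matrix, built by destructive updates that never leak;
-- runSTUArray freezes the mutable STUArray into an immutable UArray
identity :: Int -> UArray (Int, Int) Double
identity n = runSTUArray $ do
  a <- newArray ((0, 0), (n - 1, n - 1)) 0
  mapM_ (\i -> writeArray a (i, i) 1) [0 .. n - 1]
  return a

rows and columns are then both just index arithmetic over the same flat
array, so neither access direction needs a transposition.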

cheers,
v.





Re: [Haskell-cafe] Matrices in Haskell

2007-03-20 Thread Vincent Kraeutler
Claus Reinke wrote:
 When you tried using Arrays, I presume you used an array indexed by a
 pair (i,j), and just reversed the order of the index pair to switch from
 row-wise to column-wise access? It's hard to see how that would slow you
 down. Perhaps the slowdown was caused by excessive array copying?

 the difference can be in locality wrt the memory hierarchy: is the
 next element
 nearby most of the time (apart from array borders), or a
 row-/column-width away most of the time?

 i understand that, eg, fortran and c differ in their default
 interpretations of array layout, so naively translated benchmarks
 might suffer from running against the grain in one of the two
 (row-major loops over a column-major layout, or the
 other way round).

 claus



it seems unlikely to me that this would cause a degradation in
performance with respect to lists...
or is there something very interesting to learn about the inner workings
of linked lists in ghc?

regards,
v.



signature.asc
Description: OpenPGP digital signature
___
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe