Why is fact2 quicker than fact?!
fact2 :: Integer -> Integer
fact2 x = f x y
  where
    f n e | n < 2  = 1
          | e == 0 = n * (n - 1)
          | e > 0  = (f n (e `div` 2)) * (f (n - (e * 2)) (e `div` 2))
    y = 2 ^ (truncate (log (fromInteger x) / log 2))
fact :: Integer -> Integer
fact 1 = 1
fact n = n * fact (n - 1)
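(The operators in the listing above were mangled in transit; below is a self-contained copy with `->`, `<` and `>` restored, plus a quick agreement check. The check is my addition, not part of the original mail.)

```haskell
-- Self-contained copy of both definitions with the mangled
-- operators restored; the agreement check is mine, not the poster's.
fact2 :: Integer -> Integer
fact2 x = f x y
  where
    f n e | n < 2  = 1
          | e == 0 = n * (n - 1)
          | e > 0  = f n (e `div` 2) * f (n - e * 2) (e `div` 2)
    y = 2 ^ (truncate (log (fromInteger x) / log 2) :: Integer)

fact :: Integer -> Integer
fact 1 = 1
fact n = n * fact (n - 1)

-- all (\n -> fact n == fact2 n) [1 .. 15]  evaluates to True
```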
fact2 is O(log n) while fact is O(n).
fact2 is partitioning the problem.
On Wed, Dec 30, 2009 at 08:57, Artyom Kazak artyom.ka...@gmail.com wrote:
Why is fact2 quicker than fact?!

fact2 :: Integer -> Integer
fact2 x = f x y
  where
    f n e | n < 2  = 1
          | e == 0 = n * (n - 1)
          | e > 0  = (f n (e `div` 2)) * (f (n - (e * 2)) (e `div` 2))
...
fact2 is O(log n) while fact is O(n).
fact2 is partitioning the problem.
But fact2 sparks off two recursive calls. If we assume that
multiplication is a constant-time operation, both algorithms
have the same asymptotic running time (try a version that
operates on Ints). For Integers, this ...
fact2 sparks 2*n multiplications for every (n^2) factors
fact sparks n multiplications for every n factors
On Wed, Dec 30, 2009 at 10:13, Ralf Hinze ralf.hi...@comlab.ox.ac.uk wrote:
fact2 is O(log n) while fact is O(n).
fact2 is partitioning the problem.
But fact2 sparks off two recursive calls. ...
fact2 sparks 2*n multiplications for every (n^2) factors
fact sparks n multiplications for every n factors
Okay, let's count:

data Tree a = Leaf a | Fork (Tree a) (Tree a)
  deriving (Show)

fact 1 = Leaf 1
fact n = Leaf n `Fork` fact (n - 1)

fact2 x = f x y
  where
    f n e | n < 2 = ...
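The counting message is cut off above; here is a self-contained sketch of the idea (the names factT, fact2T, forks and prodT are mine): replacing (*) by Fork records every multiplication, so the two call trees can be compared directly.

```haskell
-- Sketch of the counting experiment (my reconstruction of the
-- truncated message): a Fork node stands for one multiplication.
data Tree a = Leaf a | Fork (Tree a) (Tree a)
  deriving (Show)

factT :: Integer -> Tree Integer
factT 1 = Leaf 1
factT n = Leaf n `Fork` factT (n - 1)

fact2T :: Integer -> Tree Integer
fact2T x = f x y
  where
    f n e | n < 2  = Leaf 1
          | e == 0 = Leaf n `Fork` Leaf (n - 1)
          | e > 0  = f n (e `div` 2) `Fork` f (n - e * 2) (e `div` 2)
    y = 2 ^ (truncate (log (fromInteger x) / log 2) :: Integer)

forks :: Tree a -> Int            -- number of multiplications
forks (Leaf _)   = 0
forks (Fork l r) = 1 + forks l + forks r

prodT :: Tree Integer -> Integer  -- multiply the tree back out
prodT (Leaf a)   = a
prodT (Fork l r) = prodT l * prodT r
```

Both trees multiply out to the same factorial and contain a comparable number of Forks; the difference is shape: factT is a degenerate list (one huge running product), while fact2T is roughly balanced, so its Integer operands stay similar in size.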
I can't offer much insight, but could the answer lie in the Integer
type? I suspect that with a sufficiently large fixed Int type (2^14
bits?) the performance of the two functions would be almost equal.
Could it be that the second function delays the multiplication of
large numbers as long as possible?
On Wednesday 30 December 2009 11:57:28, Artyom Kazak wrote:
Why is fact2 quicker than fact?!

fact2 :: Integer -> Integer
fact2 x = f x y
  where
    f n e | n < 2  = 1
          | e == 0 = n * (n - 1)
          | e > 0  = (f n (e `div` 2)) * (f (n - (e * 2)) (e `div` 2))
    y = 2 ^ (truncate (log (fromInteger x) / log 2))
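Roel's hunch about operand sizes can be made concrete. A sketch (the names are mine, hypothetical): compare the decimal lengths of the two operands in the final multiplication of a left-to-right product against a half-and-half split.

```haskell
-- Hypothetical illustration (my names): operand sizes in the last
-- multiplication step of each evaluation order.
digits :: Integer -> Int
digits = length . show

-- Left-to-right: the final step multiplies (n-1)! by n: huge * tiny.
naiveOperands :: Integer -> (Integer, Integer)
naiveOperands n = (product [1 .. n - 1], n)

-- Half-and-half: the final step multiplies two similar-sized halves.
balancedOperands :: Integer -> (Integer, Integer)
balancedOperands n = (product [1 .. m], product [m + 1 .. n])
  where m = n `div` 2
```

For n = 100 the naive final step multiplies a 156-digit number by a 3-digit one, while the balanced split multiplies a 65-digit number by a 94-digit one; since GMP's multiplication cost grows with operand size, balancing the operands pays off.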
On Wednesday 30 December 2009 17:28:37, Roel van Dijk wrote:
I can't offer much insight but could the answer lie in the Integer
type? I suspect that with a sufficiently large fixed Int type (2^14
bits?) the performance of the two functions would be almost equal.
For fact (10^6), we'd need ...
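The reply is truncated; as a back-of-the-envelope continuation (my arithmetic, not necessarily Daniel's): log2 (10^6 !) = sum of log2 k for k up to 10^6, which is about 1.85e7, so fact (10^6) occupies roughly 18.5 million bits, i.e. over 2 MB.

```haskell
-- My estimate (not from the original mail): bit length of n!.
bitsInFactorial :: Integer -> Double
bitsInFactorial n = sum [logBase 2 (fromIntegral k) | k <- [2 .. n]]

-- bitsInFactorial 1000000  is about 1.85e7 (roughly 18.5 million bits)
```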
As an aside, in one of my libraries I have a combinator
for folding a list in a binary-subdivision scheme.
foldm :: (a -> a -> a) -> a -> [a] -> a
foldm (*) e x
  | null x    = e
  | otherwise = fst (rec (length x) x)
  where rec 1 (a : ...
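The definition is cut off above. A self-contained reconstruction of the binary-subdivision fold (my completion: the recursive case is inferred from the visible lines, so treat it as a sketch), with factorial built on top:

```haskell
-- My reconstruction of the truncated foldm: rec n as returns the
-- balanced product of the first n elements together with the rest.
foldm :: (a -> a -> a) -> a -> [a] -> a
foldm (*) e x
  | null x    = e
  | otherwise = fst (rec (length x) x)
  where
    rec 1 (a : as) = (a, as)
    rec n as       = (a1 * a2, as2)
      where m         = n `div` 2
            (a1, as1) = rec (n - m) as
            (a2, as2) = rec m as1

-- Factorial as a balanced fold over [1 .. n].
fact3 :: Integer -> Integer
fact3 n = foldm (*) 1 [1 .. n]
```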
On Wed, Dec 30, 2009 at 10:35 AM, Ralf Hinze ralf.hi...@comlab.ox.ac.uk wrote:
As an aside, in one of my libraries I have a combinator
for folding a list in a binary-subdivision scheme.
foldm :: Monoid m => [m] -> m
I would use:

foldm :: Monoid m => [m] -> m

Which is just a better implementation of mconcat / fold. The reason I
prefer this interface is that foldm has a precondition in order to
have a simple semantics: the operator you're giving it has to be
associative. I like to use typeclasses to ...
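A sketch of the Monoid-flavoured interface Luke suggests (the implementation and the names foldmM / factM are mine, not his):

```haskell
import Data.Monoid (Product (..))

-- Balanced mconcat: pair up neighbours until one value remains.
-- Correct only if (<>) is associative, which is exactly the
-- Monoid law the interface relies on.
foldmM :: Monoid m => [m] -> m
foldmM []  = mempty
foldmM [x] = x
foldmM xs  = foldmM (pairUp xs)
  where
    pairUp (a : b : rest) = (a <> b) : pairUp rest
    pairUp rest           = rest

-- Factorial via the Product monoid.
factM :: Integer -> Integer
factM n = getProduct (foldmM (map Product [1 .. n]))
```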