On 04/11/2009, at 13:35, wren ng thornton wrote:

> Roman Leshchinskiy wrote:
>> On 04/11/2009, at 13:23, Daniel Peebles wrote:
>>> In the presence of fusion (as is the case in uvector), it's hard to
>>> give meaningful time complexities for operations, as they depend on
>>> what operations they are paired with. We need to think of a better
>>> way to express this behavior in the documentation, though.
>>
>> I have to disagree here. Fusion never makes the complexity of
>> operations worse. If it does, it's a bug.
>
> I think the point was more that the relevant complexity bound can
> change in the presence of fusion. For a poor example: the first map
> over a list is O(n), but all subsequent ones in a chain of maps are
> O(1) with fusion. I'm sure there are better examples than that, but
> you get the idea. Some people may care to know about that latter
> complexity rather than just the "independent" complexity.
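[The chain-of-maps situation can be sketched as follows. The RULES pragma here is illustrative only; GHC's base library achieves the same effect through its own fusion machinery rather than this exact rule.]

```haskell
module Main where

-- Illustrative rewrite rule: after rewriting, the second map never
-- traverses the list -- it becomes function composition instead.
{-# RULES "map/map" forall f g xs. map f (map g xs) = map (f . g) xs #-}

-- As written: naively two O(n) traversals with an intermediate list.
twoPasses :: [Int] -> [Int]
twoPasses = map (+ 1) . map (* 2)

-- What the fused code looks like: one O(n) traversal; the "second map"
-- now costs O(1) per element as part of the composed function.
onePass :: [Int] -> [Int]
onePass = map ((+ 1) . (* 2))

main :: IO ()
main = print (twoPasses [1, 2, 3], onePass [1, 2, 3])
-- prints ([3,5,7],[3,5,7])
```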
I think asymptotic complexity is the wrong tool for what you're trying
to do. You implement your algorithm using operations with known
complexities. This allows you to compute the complexity of the entire
algorithm. That's all you can use operation complexities for. The
compiler is then free to optimise the algorithm as it sees fit but is
supposed to preserve (or improve) its complexity. It is not guaranteed
or even supposed to preserve the original operations. To stay with
your example, each of the two maps is linear regardless of whether
fusion happens. Executing the two maps, be it one after another or
interleaved, is linear simply because O(n) + O(n) = O(n), not because
of fusion.
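[To make this concrete, a sketch; the names `unfused` and `fused` are made up for illustration. Both versions are Theta(n): fusion removes the intermediate list and the second traversal, changing the constant factor but not the asymptotic bound.]

```haskell
module Main where

-- Unfused: two separate O(n) traversals and an intermediate list.
unfused :: [Int] -> [Int]
unfused xs = map (+ 1) (map (* 2) xs)

-- Fused by hand: a single O(n) traversal doing both steps per element.
fused :: [Int] -> [Int]
fused = foldr (\x acc -> x * 2 + 1 : acc) []

main :: IO ()
main = print (unfused [1 .. 5] == fused [1 .. 5])  -- prints True
```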
Essentially, you're trying to use complexity to describe an
optimisation which doesn't actually affect the complexity.
Roman
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe