On 12/04/2015 02:27 AM, Andrei Alexandrescu wrote:
Consider the collections universe. So we have an imperative primitive like:

c.insertAfter(r, x)

where c is a collection, r is a range previously extracted from c, and x
is a value convertible to the collection's element type. The expression
imperatively inserts x into the collection right after r.

Now this primitive may have three complexities:

* linear in the length of r (e.g. c is a singly-linked list)

* linear in the number of elements after r in the collection (e.g. c is
an array)

* constant (c is a doubly-linked list)
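A rough sketch of why the three cases differ. This is illustrative Python with invented names (SNode, DNode, the insert helpers), not the actual D collections API: a singly-linked range only knows its front, an array must shift its tail, and a doubly-linked range can hold a pointer to its last node.

```python
class SNode:
    """Node of a singly-linked list."""
    def __init__(self, value, nxt=None):
        self.value, self.next = value, nxt

def sll_insert_after_range(first, length, x):
    """Singly-linked list: a forward range exposes only its front node,
    so inserting after the range means walking all of it -- O(len(r))."""
    node = first
    for _ in range(length - 1):
        node = node.next
    node.next = SNode(x, node.next)

def array_insert_after(a, r_end, x):
    """Array: every element past the range must shift one slot right --
    O(len(a) - r_end). Python's list.insert does exactly this shift."""
    a.insert(r_end, x)

class DNode:
    """Node of a doubly-linked list."""
    def __init__(self, value):
        self.value, self.prev, self.next = value, None, None

def dll_insert_after(last_of_r, x):
    """Doubly-linked list: the range can carry a pointer to its last
    node, so splicing in the new node is O(1) -- no traversal at all."""
    n = DNode(x)
    n.prev, n.next = last_of_r, last_of_r.next
    if last_of_r.next is not None:
        last_of_r.next.prev = n
    last_of_r.next = n
```

The asymptotic gap is entirely in what handle the range can legally hold: a front pointer, an index, or a back pointer.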

These complexities must be reflected in the name of the primitives. Or
perhaps it's okay to conflate a couple of them. I know names are
something we're awfully good at discussing :o). Destroy!


Andrei

Regarding nomenclature, it would be awesome if we could call this "running time" instead of "complexity". Algorithms have running times and memory usages; problems have (time and space) complexities. I know that calling running time "complexity" is awfully popular outside of actual complexity-theory papers. I don't really understand what calling it "complexity" actually buys, though.
