Thu, 4 Nov 1999 11:07:54 +0000 (GMT), Peter Hancock <[EMAIL PROTECTED]> writes:
> Would it really be helpful to guarantee it formally? I can't think
> of *any* implementation which will guarantee constant time access *for
> arbitrarily large addresses* - up to say 2 `exp` (2 `exp` 128).
I don't know. Here is how the C++ STL tries to do it
<http://www.sgi.com/Technology/STL/complexity.html>
(I'm not sure about the URL):
[...]
It is difficult to specify precisely when an algorithm satisfies a
performance constraint. Does copying a vector on a 16-bit embedded
processor take constant time? After all, the size of the vector is
limited to some value less than 65,536. Thus the number of memory
operations involved in the copy operation is certainly bounded by a
constant. It is even conceivable that the worst case vector copy time
on this processor may be less than the worst-case time for a single
memory access on a machine with paged virtual memory. Nonetheless, it
would be intuitively wrong to describe a vector copy or a list
traversal as being a constant time operation. Even on this machine, a
vector implemented as a list is unlikely to yield satisfactory
performance. (Of course, neither would an implementation that looped
for a second on every vector access, although that would clearly run in
constant time. The point here is to communicate the proper intent
between implementor and user, not to guard against malicious or silly
implementations.)
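Not part of the quoted text, but to make the vector-as-list point
concrete: here is a minimal C++ sketch (the name ListVector is mine)
of a "vector" whose subscript operator walks a linked list. Even on a
16-bit machine where every index is bounded by 65,535, each access
costs a number of steps proportional to the index, which is exactly
the behaviour the specification wants to rule out.

    #include <cstddef>
    #include <list>

    // A "vector" backed by a linked list. operator[] walks i links,
    // so an access costs O(i) steps even though i is bounded by the
    // address space. Calling this "constant time" would satisfy the
    // letter of the bound on a 16-bit machine, but not the intent.
    template <typename T>
    class ListVector {
        std::list<T> elems;
    public:
        void push_back(const T& x) { elems.push_back(x); }  // O(1)
        T& operator[](std::size_t i) {                      // O(i)
            typename std::list<T>::iterator it = elems.begin();
            while (i-- > 0) ++it;                           // walk i links
            return *it;
        }
        std::size_t size() const { return elems.size(); }
    };

Summing all n elements through this operator[] costs O(n^2) steps on
any machine, bounded addresses or not, while the same loop over a real
vector costs O(n).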
Fundamentally, it is difficult to define the notion of asymptotic
algorithm complexity precisely for real computer hardware instead of
an abstract machine model. Thus we settle for the following
guidelines:
1. For an algorithm A to have running time O(f(n)), there must be a
corresponding algorithm A' that is correct on machines with
arbitrarily long pointer and size_t types, such that A and A'
perform essentially the same sequence of operations on the actual
hardware. (In simple cases A and A' will be the same. In other
cases A may have been simplified with the knowledge that addresses
are bounded; see the binary-search sketch after this list.) For
inputs of sufficiently large size n, A' must
take at most time Cf(n), where C is a constant, independent of
both n and the address size. (Pointer, size_t, and ptrdiff_t
operations are presumed to take constant time independent of their
size.)
2. All container or iterator complexity specifications refer to
amortized complexity. An individual operation may take longer than
specified. But any sufficiently long sequence of operations on the
same container or iterator will take at most as long as the
corresponding sum of the specified operation costs. (See the
push_back sketch after this list.)
3. Algorithms specify either worst-case or average case performance,
and identify which. Unless otherwise stated, averages assume that
container elements are chosen from a finite type with more
possible values than the size of the container, and that container
elements are independently uniformly distributed. (See the
hash-lookup sketch after this list.)
4. A complexity specification for an operation f assumes that
operations invoked by f require at most the specified runtime. But
algorithms generally remain appropriate if the invoked operations
are no more than a logarithmic factor slower than specified in the
expected case.
5. If operations are more expensive than assumed by a function F in
the current STL, then F will slow down at most in proportion to
the added cost. Any future operations that fail to satisfy this
property will make that explicit.
To make this precise, assume F is specified to use time f(m) for
input of size m. F uses operations Gk, with specified running
times gk(n) on input size n. If F is used in a context in which
each Gk is slower than expected by at most a factor h(n), then F
slows down by at most a factor h(m). This holds because none of
the current algorithms ever apply the operations Gk to inputs
significantly larger than m. (The string-sort sketch after this list
is a concrete instance.)
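The following sketches are mine, not part of the quoted document.
First, guideline 1: binary search passes the test, because the very
same code is correct for arbitrarily long size_t and performs at most
about log2(n) iterations of presumed-constant-time pointer and index
operations. The list-backed operator[] sketched earlier fails it: its
A' must walk i links, and no constant C independent of n bounds that.

    #include <cstddef>

    // Binary search over a sorted array (a hand-rolled lower_bound).
    // Here A' = A: the code works unchanged for arbitrarily long
    // size_t, and each of its ~log2(n) iterations does O(1) index
    // and pointer operations, so the O(log n) bound holds in the
    // sense of guideline 1, independent of the address size.
    const int* lower_bound_sketch(const int* a, std::size_t n, int key) {
        std::size_t lo = 0, hi = n;               // search [lo, hi)
        while (lo < hi) {
            std::size_t mid = lo + (hi - lo) / 2; // no overflow
            if (a[mid] < key) lo = mid + 1; else hi = mid;
        }
        return a + lo;  // first element not less than key
    }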
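For guideline 2, the classic example is vector growth: a single
push_back may reallocate and copy every element, but assuming the
usual geometric capacity growth, n calls cost O(n) in total, i.e.
amortized constant time per call.

    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v;
        std::size_t grows = 0;
        for (int i = 0; i < 1000000; ++i) {
            // When size() == capacity(), this push_back reallocates
            // and copies all existing elements: O(size()) for that
            // one call, far above the specified amortized cost.
            if (v.size() == v.capacity()) ++grows;
            v.push_back(i);
        }
        // Geometric growth keeps the total number of copies over n
        // calls at O(n), so the amortized cost per call is O(1).
        std::cout << v.size() << " elements, "
                  << grows << " reallocations\n";
    }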
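For guideline 3, the distribution assumption is what makes "average
case" bounds for hashed containers meaningful. In the sketch below a
degenerate hash function stands in for a degenerate input
distribution (and the modern std::unordered_set stands in for SGI's
hash_set): with every element in one bucket, find() is a linear scan,
the worst case rather than the average.

    #include <cstddef>
    #include <string>
    #include <unordered_set>

    // A legal but worst-possible hash: every key lands in bucket 0.
    struct ConstantHash {
        std::size_t operator()(const std::string&) const { return 0; }
    };

    int main() {
        std::unordered_set<std::string, ConstantHash> s;
        s.insert("a"); s.insert("b"); s.insert("c");
        // Each find() now walks a single bucket holding every
        // element: O(n), not the advertised average O(1).
        return s.find("b") != s.end() ? 0 : 1;
    }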
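Finally, guidelines 4 and 5 together: sort is specified assuming
constant-time comparisons. If the elements are strings of length up
to L, each comparison costs O(L), and the whole sort slows down by at
most that same factor, from O(n log n) to O(n log n * L). That is
exactly the proportional slowdown guideline 5 promises, because sort
never compares anything larger than its own input elements.

    #include <algorithm>
    #include <string>
    #include <vector>

    // sort performs O(n log n) comparisons on average. Each string
    // comparison here costs up to O(L) for strings of length L, so
    // the invoked operation is slower than the presumed constant
    // time, and the sort as a whole costs O(n log n * L): it slows
    // down in proportion to the added cost, as guideline 5 states.
    void sort_strings(std::vector<std::string>& xs) {
        std::sort(xs.begin(), xs.end());
    }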
--
__("< Marcin Kowalczyk * [EMAIL PROTECTED] http://qrczak.ids.net.pl/