What's the rationale behind array appending growing the capacity by one page at a time instead of doubling it, once the array exceeds a certain size (cf. what std::vector does)? That seems like an example of (evil?) premature optimization.

----
void fill(T)(size_t n)   // T.sizeof and n (some big value) both matter here
{
    T[] a;
    foreach (i; 0 .. n)
        a ~= T.init;     // append one element at a time
}
----
Depending on T.sizeof and n, the code above is linear in n (as long as the capacity stays below a page) or quadratic (once the capacity grows one page at a time).
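In case it helps the discussion, the growth pattern is easy to observe by querying the built-in .capacity property after each append. A minimal sketch (int elements and the print format are just for illustration):
----
import std.stdio;

void main()
{
    int[] a;
    size_t lastCap = 0;
    foreach (i; 0 .. 1_000_000)
    {
        a ~= i;
        if (a.capacity != lastCap)   // capacity changed, i.e. the runtime grew the block
        {
            writefln("length %8s  capacity %8s", a.length, a.capacity);
            lastCap = a.capacity;
        }
    }
}
----
Once the capacity deltas stop growing and settle at a fixed number of elements per step, the total cost of the loop becomes quadratic.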

The C++ version doesn't have this problem: growth is geometric (doubling in most implementations), which keeps appends amortized O(1).
----
#include <vector>
template <class T>
void fill(int n)            // n = some big value
{
    std::vector<T> a;
    for (int i = 0; i < n; i++)
        a.push_back(T());   // amortized O(1) per append
}
----

Note that we can't always use a.reserve(n): in more complex examples we don't know the final size up front. And std.array's appender is more restrictive. I would love some explanations!
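For context, here is roughly what the two workarounds look like; the point is that reserve needs n up front, while appender manages its own buffer growth but isn't a drop-in replacement for a plain slice (a sketch, element type int assumed):
----
import std.array : appender;

void main()
{
    enum n = 1_000_000;

    // Workaround 1: reserve, only usable when n is known up front.
    int[] a;
    a.reserve(n);        // single allocation; later appends don't reallocate
    foreach (i; 0 .. n)
        a ~= i;

    // Workaround 2: std.array.appender, which handles growth itself
    // but isn't an ordinary slice while you're filling it.
    auto app = appender!(int[]);
    foreach (i; 0 .. n)
        app.put(i);
    int[] b = app.data;  // the accumulated array
}
----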
