mastrost wrote:
> the attached source code shows that *9/8 is almost as fast as *2.

No it is not. I mean it is certainly true for your very particular problem, but in fact it is not even the same order of complexity. Let's take this code:

void main() {
    int n = 100;
    int[] myArray;
    assert(myArray.length == 0);
    for (auto i = 0; i < n; ++i) {
        myArray ~= 1;
    }
    assert(myArray.length == n);
}

Now assume k1 and k2 are constants. If you reallocate the memory using (requiredSize * k1), you will do about 'n' reallocations, which means 'n' copies of the buffer, each costing up to 'n' operations. So your algorithm costs about n^2 operations. If you reallocate the memory using (oldSize * k2), you will do only about 'log(n)' reallocations. That is why the STL uses (oldSize * 2).

When using (oldSize * k2), the memory used is of the same order of magnitude as with (requiredSize * k1), but the execution time becomes n*log(n) instead of n^2. This is only asymptotic behaviour (it becomes more and more accurate as n grows), so in real life it is not necessarily true that oldSize*2 is really better. But if you cannot be sure that (requiredSize * k1) is better than (oldSize * k2), I think you should use (oldSize * k2).

I was taught this stuff a couple of years ago, so I might be mistaken. Moreover, I did not run the experiment myself.

regards
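To make the asymptotic argument above concrete, here is a minimal D sketch (not the benchmark attached to the original post) that only counts reallocations and copied elements for n single-element appends. The "about n reallocations" case is modelled here as a rule that adds a constant amount of slack (required + 16); that constant, the doubling rule and n = 100_000 are illustrative assumptions, not values from the thread.

import std.stdio;

struct Stats { size_t reallocs; size_t copies; }

// Simulate n single-element appends; `grow` maps (old capacity, required size)
// to the new capacity chosen at each reallocation.
Stats simulate(alias grow)(size_t n) {
    Stats s;
    size_t len = 0, cap = 0;
    foreach (i; 0 .. n) {
        if (len + 1 > cap) {
            cap = grow(cap, len + 1);
            ++s.reallocs;
            s.copies += len;   // the old contents are copied into the new buffer
        }
        ++len;
    }
    return s;
}

void main() {
    enum size_t n = 100_000;
    // constant slack per reallocation: Theta(n) reallocations, Theta(n^2) copies
    auto arithmetic = simulate!((oldCap, required) => required + 16)(n);
    // geometric growth (oldSize * 2): Theta(log n) reallocations, Theta(n) copies
    auto geometric = simulate!((oldCap, required) => oldCap * 2 > required ? oldCap * 2 : required)(n);
    writefln("constant slack (+16): %s reallocations, %s elements copied", arithmetic.reallocs, arithmetic.copies);
    writefln("geometric (*2):       %s reallocations, %s elements copied", geometric.reallocs, geometric.copies);
}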
Given that requiredSize == oldSize+1, I don't see any difference between the schemes...
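The observation above can be checked with a tiny sketch, assuming k1 = 9/8 and one-element appends: since requiredSize == oldSize + 1, the capacities produced by (requiredSize * 9 / 8) already form a geometric sequence, so the number of reallocations is O(log n) here too, just as with an (oldSize * k2) rule.

import std.stdio;

void main() {
    size_t cap = 1;
    foreach (step; 0 .. 20) {
        write(cap, " ");
        cap = (cap + 1) * 9 / 8;   // requiredSize * k1 with requiredSize = oldSize + 1
    }
    writeln();
    // prints: 1 2 3 4 5 6 7 9 11 13 15 18 21 24 28 32 37 42 48 55
    // (slow at first because of integer rounding, then clearly geometric)
}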
