I personally have written a lot of code that relies on the slice operator being extremely cheap, which is the whole point of the way D arrays are designed. For example, using slicing and tail recursion instead of indices and looping is a very elegant, readable way of implementing binary search. I'm not sure we want to add any overhead here, even if it's only a few instructions.
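To make the pattern concrete, here is a small sketch of the slicing-plus-recursion style of binary search I mean (my own illustration, not code from the thread). Each recursive call narrows the view with a slice, which in D is just a pointer/length adjustment, not a copy:

```d
import std.stdio;

// Membership test on a sorted array via slicing and tail recursion.
// No lo/hi indices: each call recurses on a cheap sub-slice.
bool contains(const(int)[] a, int x)
{
    if (a.length == 0)
        return false;
    immutable mid = a.length / 2;
    if (x < a[mid])
        return contains(a[0 .. mid], x);     // left half: cheap slice
    if (x > a[mid])
        return contains(a[mid + 1 .. $], x); // right half: cheap slice
    return true;
}

void main()
{
    assert(contains([1, 3, 5, 7, 9], 7));
    assert(!contains([1, 3, 5, 7, 9], 4));
    writeln("ok");
}
```

If slicing had to touch a cache on every `a[0 .. mid]`, that cost would be paid at every level of the recursion.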
On Mon, Mar 22, 2010 at 11:04 AM, Andrei Alexandrescu <[email protected]> wrote:

> Right now array append is reasonably fast due to Steve's great work.
> Basically the append operation caches the capacities of the last arrays
> that were appended to, which makes the capacity query fast most of the
> time.
>
> I was thinking of a possible change. How about having the slice operator
> arr[a .. b] remove the array from the cache? That way we can handle
> arr.length = n differently:
>
> (a) if n > arr.length, resize appropriately
> (b) if n < arr.length AND the array is in the cache, keep the array in
> the cache.
>
> The change is at (b). If the array is in the cache and its length is made
> smaller, then we can be sure the capacity will stay correct after the
> resizing. This is because we know for sure there will be no stomping - if
> stomping were possible, the array would not be in the cache.
>
> Could this work? And if it does, would it be a good change to make?
>
>
> Andrei
> _______________________________________________
> phobos mailing list
> [email protected]
> http://lists.puremagic.com/mailman/listinfo/phobos
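For reference, the proposed arr.length = n handling could be sketched like this. This is pseudocode in D syntax; `inCache` and `grow` are invented names standing in for whatever the runtime actually uses, not the real druntime API:

```d
// Hypothetical sketch of the proposal; inCache/grow are made-up helpers.
void setLength(T)(ref T[] arr, size_t n)
{
    if (n > arr.length)
    {
        // (a) growing: resize appropriately, reallocating if needed.
        grow(arr, n);
    }
    else
    {
        // (b) shrinking: shorten the view in place. If arr is in the
        // cache, it stays there - any slice arr[a .. b] would have
        // evicted it, so nothing else can stomp past n and the cached
        // capacity remains correct after the shrink.
        arr = arr.ptr[0 .. n];
    }
}
```

The invariant doing the work is the complementary rule: slicing evicts the array from the cache, so "still cached" implies "no aliasing slice exists that could stomp".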
