On Mon, Feb 21, 2011 at 16:49, dexen deVries <dexen.devr...@gmail.com> wrote:
> Hi list,
>
> On Monday 21 of February 2011 19:58:32 you wrote:
>>    Articles & Essays  ->  Array Abstinence
>>
>> or at the direct link "http://picolisp.com/5000/-2-1d.html".
>
>
> The article lays it all out clearly and neatly, with one caveat I'd like
> to mention.
>
> <rant mode on>
> Alex writes, ``Given a number, the associated data can be found [in array]
> in O(1) time.''
>
> That holds true if, and only if, the relevant part of the array is already
> present in the L1 cache. Otherwise, about 200...300 CPU cycles will be
> wasted waiting for the cache line to be filled before the data is
> available to the CPU. In pathological cases, this wasted time actually
> dominates the execution time.
>
> The data will be in cache most of the time for small arrays, but will NOT
> be in cache for larger arrays. So if you think ``for large amounts of
> data, arrays will be significantly faster'', don't -- there's significant
> overhead related to cache loads. On the contrary, traversing PicoLisp's
> linked lists will be quite fast.
>
> Now if you access the array in a semi-random order, so-called `cache
> thrashing' will ensue. It's pretty much like reading random data from the
> hard drive (for example, from swap): the CPU, starved of data, will idle
> uselessly.
>
> On the other hand, if you access the array sequentially, it's no better
> than simply traversing the PicoLisp linked list. You can even reasonably
> expect the CPU to pre-fetch the relevant data into the cache.
>
> If you really, positively want to index a huge amount of data, use a
> binary search tree, with O(log n) complexity on average. Building BSTs
> from PicoLisp lists is easy :)
> </rant mode off>
>
>
> To wrap it up: surprisingly, due to the modern memory/CPU speed disparity,
> random access to data in an array isn't all that fast.
> Theoretically it's O(1), but in practice it will incur the high overhead
> of cache misses. Traversing a list is fast enough unless you have huge
> datasets; in that case, use a binary search tree.
>
>
> http://lwn.net/Articles/252125/
> (& some other publication by some Sun guy)
> :D

That should be in the essay!

Anyway, one of the reasons people (me included) seem to favor arrays is not
efficiency, but that some algorithms deeply wired into our brains are more
clearly represented using arrays instead of singly linked lists. But that's
just a matter of being used to a way of thinking; I guess we just need to
learn how to write clear code using lists instead of arrays, perhaps
favoring other algorithms. Alex's work on RosettaCode helps a lot with
that :] -- I surprised myself when I could code that BF interpreter using
only lists, with fairly concise and clear code.
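Here's a rough sketch of that "build a BST from a list" idea dexen mentions
(in Python rather than PicoLisp, and purely my own illustration, not
anything from Alex's essay): keep the pairs sorted, take the middle element
as the root, and recurse on both halves, so a lookup needs only O(log n)
comparisons instead of a full O(n) traversal.

```python
# Illustrative sketch, not from the essay: build a balanced binary
# search tree from a sorted list of (key, value) pairs, then look
# keys up in O(log n) comparisons on average.

def build_bst(pairs):
    """Recursively build a balanced BST from sorted (key, value) pairs."""
    if not pairs:
        return None
    mid = len(pairs) // 2
    key, val = pairs[mid]
    # A node is a plain tuple: (key, value, left subtree, right subtree)
    return (key, val, build_bst(pairs[:mid]), build_bst(pairs[mid + 1:]))

def bst_lookup(node, key):
    """Descend the tree, halving the search space at each step."""
    while node is not None:
        k, v, left, right = node
        if key == k:
            return v
        node = left if key < k else right
    return None

# Usage: index 1000 items, then look a couple up
tree = build_bst([(n, n * n) for n in range(1000)])
print(bst_lookup(tree, 31))    # -> 961
print(bst_lookup(tree, 2000))  # -> None (not indexed)
```

A walk down the tree touches only ~log2(n) nodes, so even when the data
doesn't fit in cache, you pay for far fewer memory fetches than a linear
scan over a huge list.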
> --
> dexen deVries

Cheers,
José

--
UNSUBSCRIBE: mailto:email@example.com?subject=Unsubscribe