I know that array copy is much more expensive than a single update. But when you say:
On Wed, 25 Feb 2004, Simon Peyton-Jones wrote:
> Haskell's current story is to use O(log n) structures such as trees

Yes, I got that. The question is how I trade off between reads and
writes. If I write very infrequently, I may be willing to pay for
varying degrees of "arrayness" (lower read time, higher update time).
But in managing this tradeoff, which is faster:

 * constructing/destructing e.g. 16 tree nodes (for a 65,000-item table), or
 * 2 memcpys of 256-item arrays (perhaps after your primop?)

If the latter is not dramatically slower, then I will bias towards more
arrayness. In this context, I believe that MMX instructions in modern
CPUs dramatically accelerate in-cache memcpy. If that is the case, then
e.g. 256-element arrays could be optimal.

> You should feel no need to apologise. Our community *needs* people like
> you, who are using Haskell for real applications. That's how we'll
> learn "where the shoe pinches". Ask away.

Thank you.

-Alex-

_________________________________________________________________
S. Alexander Jacobson                  mailto:[EMAIL PROTECTED]
                                       tel:917-770-6565
                                       http://alexjacobson.com
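
P.S. For concreteness, here is roughly the two-level structure I have in
mind (an untested sketch using plain Data.Array, with illustrative names;
a real version would presumably want unboxed arrays and the primop you
mention). Each update rebuilds one 256-element inner array plus the
256-element outer array, i.e. the "2 memcpys" above:

  import Data.Array

  -- A 256 x 256 table covering indices 0..65535.
  type Table a = Array Int (Array Int a)

  mkTable :: a -> Table a
  mkTable x = listArray (0, 255) (replicate 256 inner)
    where inner = listArray (0, 255) (replicate 256 x)

  -- Read: two array indexings.
  lookupT :: Table a -> Int -> a
  lookupT t i = (t ! hi) ! lo
    where (hi, lo) = i `divMod` 256

  -- Update: copy one inner array and the outer array.
  updateT :: Table a -> Int -> a -> Table a
  updateT t i x = t // [(hi, (t ! hi) // [(lo, x)])]
    where (hi, lo) = i `divMod` 256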