Matt Weinstein wrote:

> Cache-oblivious is sort of divide-and-conquer or recursive-descent --- 
> it's clever partitioning so the inner loops are always working in very 
> tight spaces, careful CPU instruction cache use, taking advantage of the 
> fact that you get 70-500 instructions per cache line fetch (this is in 
> the literature).  In the case of string matching, you'd reduce the 
> search space as quickly as possible by taking the most diverse component 
> first, e.g.
>         "ABC" "ADE" "AFF" "ABG"
> You'd start by looking at [2] instead of [0] because you prune earlier.

I would say that even with the four subscriptions above, you want to look 
at [0] first, because the input string can be, say, "BCD", and should be 
dropped immediately.

Anyway, the only way to prove that one algorithm is better than another is 
to implement and benchmark both.

Martin
_______________________________________________
zeromq-dev mailing list
[email protected]
http://lists.zeromq.org/mailman/listinfo/zeromq-dev