To say nothing of the fact that they, wisely, throw massive amounts of hardware at the problem. They can do this in part because they can amortize the cost over a massive user base. I don't think anyone using our tools is going to be in a remotely similar situation.


On Mar 25, 2007, at 1:39 PM, Christian Theune wrote:

Somehow Google also manages to do the batching extremely fast.

Google is a poor point of comparison when talking about DBMS
efficiency. Google accepts sloppy and "wrong" results in exchange for performance.

(E.g. they update their indexes in a distributed fashion and allow
different results to be returned for the same search each time you run
it; they just don't care.)

This allows them to do heavy caching and to use their map/reduce recipe on a
more localized set of data.
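
A toy Python sketch of that idea (purely illustrative; the Shard, Hit, and
search names are made up here): each shard answers a query from its own
locally cached, independently refreshed index, and a map/reduce-style merge
combines whatever the shards happen to hold at that moment, which is why two
identical searches can return different results.

    from dataclasses import dataclass, field

    @dataclass
    class Hit:
        doc_id: str
        score: float

    @dataclass
    class Shard:
        # Each shard keeps its own locally cached, independently updated index.
        cache: dict = field(default_factory=dict)  # query -> list of Hits

    def search(shards, query, top_n=10):
        # "Map": ask every shard for whatever it has cached for this query.
        partial = (shard.cache.get(query, []) for shard in shards)
        # "Reduce": merge and rank the per-shard hits.  Shards refresh
        # independently, so the same query can see different data at
        # different times -- consistency traded for cacheability.
        merged = [hit for hits in partial for hit in hits]
        return sorted(merged, key=lambda h: h.score, reverse=True)[:top_n]

    shards = [Shard({"zodb": [Hit("doc-1", 0.9)]}),
              Shard({"zodb": [Hit("doc-2", 0.7)]})]
    print(search(shards, "zodb"))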

Jim Fulton                      mailto:[EMAIL PROTECTED]
CTO                             (540) 361-1714
Zope Corporation
